Search results for: sequence evolution
530 Paramecium as a Model for the Evaluation of Toxicity (Growth, Total Proteins, Respiratory and GSH Biomarker Changes) Observed after Treatment with Essential Oils Isolated from the Artemisia herba-alba Plant of Algeria
Authors: Bouchiha Hanene, Rouabhi Rachid, Bouchama Khaled, Djebar Berrebbah Houraya, Djebar Mohamed Reda
Abstract:
Recently, some natural products such as essential oils (EOs) have been used in the field as alternatives to synthetic compounds in order to minimize negative impacts on the environment. This fact has led to questions about the possible impact of EOs on ecosystems. Currently in toxicology, the use of alternative models can help to understand the mechanisms of toxic action at different levels of organization of ecosystems. Algae, protozoa and bacteria form the base of the food chain, and protozoan cells are often used as bioindicators of environmental pollution. Unicellular organisms offer the possibility of directly studying independent cells with the specific characteristics of individual cells and whole organisms at the same time. This unicellularity facilitates the study of physiological processes and of the effects of pollutants at the cellular level, which makes such organisms widely used to assess the toxic effects of various xenobiotics. This study aimed to verify the effects of the EOs of one famous plant used extensively in our folk medicine, namely Artemisia herba-alba, in causing acute (24 hours) and chronic (15 days) toxicity in the cellular model Paramecium sp. To this end, Paramecium cells were exposed to various concentrations (three doses were chosen) of EOs extracted from the plant (Artemisia herba-alba). In the first experiment, the cell cultures were exposed for 48 hours to different concentrations to determine the median lethal concentration (LC50). We followed the evolution of physiological (growth) and biochemical (total proteins, respiratory metabolism) parameters, as well as the variations of the biomarker GSH. Our results highlighted a slight inhibition of the growth of the protozoa as well as a disturbance of the total protein contents and a reduction in the level of reduced glutathione. The polarographic study revealed a stimulation of O2 consumption in the treated cells.
Keywords: essential oils, protozoa, bioindicators, toxicity, growth, biomarker, proteins, polarography
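To illustrate the kind of LC50 determination described above, a minimal dose-response sketch in Python follows; the concentrations, mortality fractions and the two-parameter log-logistic form are hypothetical placeholders, not the study's data or method.

```python
# Minimal LC50 sketch: fit a two-parameter log-logistic curve to hypothetical
# 48-hour mortality data and read off the median lethal concentration.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # EO concentration (arbitrary units)
mortality = np.array([0.05, 0.20, 0.45, 0.80, 0.95])  # fraction of dead cells after 48 h

def log_logistic(c, lc50, slope):
    """Two-parameter log-logistic dose-response curve."""
    return 1.0 / (1.0 + (lc50 / c) ** slope)

(lc50, slope), _ = curve_fit(log_logistic, conc, mortality, p0=[20.0, 2.0])
print(f"Estimated LC50 ~ {lc50:.1f}, slope ~ {slope:.2f}")
```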
Procedia PDF Downloads 346
529 Structural and Biochemical Characterization of Red and Green Emitting Luciferase Enzymes
Authors: Wael M. Rabeh, Cesar Carrasco-Lopez, Juliana C. Ferreira, Pance Naumov
Abstract:
Bioluminescence, the emission of light from a biological process, is found in various living organisms including bacteria, fireflies, beetles, fungi and different marine organisms. Luciferase is an enzyme that catalyzes a two-step oxidation of luciferin in the presence of Mg2+ and ATP to produce oxyluciferin, releasing energy in the form of light. The luciferase assay is used in biological research and clinical applications for in vivo imaging, cell proliferation, and protein folding and secretion analysis. The luciferase enzyme consists of two domains, a large N-terminal domain (residues 1-436) that is connected to a small C-terminal domain (residues 440-544) by a flexible loop that functions as a hinge for opening and closing the active site. The two domains are separated by a large cleft housing the active site that closes after binding the substrates, luciferin and ATP. Even though all insect luciferases catalyze the same chemical reaction and share 50% to 90% sequence homology and high structural similarity, they emit light of different colors, from green at 560 nm to red at 640 nm. Currently, the majority of structural and biochemical studies have been conducted on green-emitting firefly luciferases. To address the color emission mechanism, we expressed and purified two luciferase enzymes with blue-shifted green and red emission from the indigenous Brazilian species Amydetes fanestratus and Phrixothrix, respectively. The two enzymes naturally emit light of different colors, and they are an excellent system for studying the color-emission mechanism of luciferases, as the currently proposed mechanisms are based on mutagenesis studies. Using a vapor-diffusion method and a high-throughput approach, we crystallized both enzymes and solved their crystal structures, at 1.7 Å and 3.1 Å resolution respectively, using X-ray crystallography. The free enzyme adopted two open conformations in the crystallographic unit cell that are different from the previously characterized firefly luciferase. The blue-shifted green luciferase crystallized as a monomer, similar to other luciferases reported in the literature, while the red luciferase crystallized as an octamer and was also purified as an octamer in solution. The octamer conformation is the first of its kind for any insect luciferase, which might be related to the red color emission. Structurally designed mutations confirmed the importance of the transition between the open and closed conformations in the fine-tuning of the color, and the characterization of other interesting mutants is underway.
Keywords: bioluminescence, enzymology, structural biology, x-ray crystallography
Procedia PDF Downloads 326
528 Strategic Metals and Rare Earth Elements Exploration of Lithium Cesium Tantalum Type Pegmatites: A Case Study from Northwest Himalayas
Authors: Auzair Mehmood, Mohammad Arif
Abstract:
The LCT (Li, Cs and Ta rich)-type pegmatites, genetically related to peraluminous S-type granites, are being mined for strategic metals (SMs) and rare earth elements (REEs) around the world. This study investigates the SM and REE potential of pegmatites that are spatially associated with an S-type granitic suite of the Himalayan sequence, specifically the Mansehra Granitic Complex (MGC), northwest Pakistan. Geochemical signatures of the pegmatites and some of their mineral extracts were analyzed using the Inductively Coupled Plasma Mass Spectrometry (ICP-MS) technique to explore and generate potential prospects (if any) for SMs and REEs. In general, the REE patterns of the studied whole-rock pegmatite samples show a tetrad effect and possess low total REE abundances, strong positive europium (Eu) anomalies, weak negative cesium (Cs) anomalies and relative enrichment in heavy REEs. Similar features have been observed in the REE patterns of the feldspar extracts. However, the REE patterns of the muscovite extracts reflect preferential enrichment and possess negative Eu anomalies. The trace element evaluation further suggests that the MGC pegmatites have undergone low levels of fractionation. Various trace element concentrations (and their ratios), including Ta versus Cs, K/Rb (potassium/rubidium) versus Rb and Th/U (thorium/uranium) versus K/Cs, were used to analyze the economically viable mineral potential of the studied rocks. On most of the plots, concentrations fall below the dividing line and indicate either barren or low-level mineralization potential of the studied rocks for both SMs and REEs. The results demonstrate the paucity of the MGC pegmatites with respect to Ta-Nb (tantalum-niobium) mineralization, which is in sharp contrast to many Pan-African S-type granites around the world. The MGC pegmatites are classified as muscovite pegmatites based on their K/Rb versus Cs relationship. This classification is consistent with the occurrence of rare accessory minerals like garnet, biotite, tourmaline, and beryl. Furthermore, the classification corroborates an earlier sorting of the MGC pegmatites into muscovite-bearing, biotite-bearing, and subordinate muscovite-biotite types. These types of pegmatites lack any significant SM and REE mineralization potential. Field relations, such as close spatial association with the parent granitic rocks and the absence of internal zonation structure, also reflect the barren character and hence the lack of any potential prospects of the MGC pegmatites.
Keywords: exploration, fractionation, Himalayas, pegmatites, rare earth elements
Procedia PDF Downloads 203
527 Multi-Criteria Evolutionary Algorithm to Develop Efficient Schedules for Complex Maintenance Problems
Authors: Sven Tackenberg, Sönke Duckwitz, Andreas Petz, Christopher M. Schlick
Abstract:
This paper introduces an extension of the well-established Resource-Constrained Project Scheduling Problem (RCPSP) to apply it to complex maintenance problems. The problem is to assign technicians to a team which has to process several tasks with multi-level skill requirements during a work shift. Here, several alternative activities for a task allow both the temporal shift of activities and the reallocation of technicians and tools. As a result, switches from one valid work process variant to another can be considered and may be selected by the developed evolutionary algorithm based on the present skill level of technicians or the available tools. An additional complication of the observed scheduling problem is that the locations of the construction sites are only temporarily accessible during a day. Due to intensive rail traffic, the available time slots for maintenance and repair works are extremely short and are often distributed throughout the day. To identify efficient working periods, a first concept of a Bayesian network is introduced and integrated into the extended RCPSP with pre-emptive and non-pre-emptive tasks. Thereby, the Bayesian network is used to calculate the probability of a maintenance task being processed during a specific period of the shift. Focusing on the domain of maintenance of railway infrastructure in metropolitan areas, one of the most unproductive processes at a construction site, the paper illustrates how the extended RCPSP can be applied for maintenance planning support. A multi-criteria evolutionary algorithm with a problem representation is introduced which is capable of revising technician-task allocations, whereas the duration of a task may be stochastic. The approach uses a novel activity list representation to ensure easily describable and modifiable elements which can be converted into detailed shift schedules. Thereby, the main objective is to develop a shift plan which maximizes the utilization of each technician by minimizing the waiting times caused by rail traffic. The results of the already implemented core algorithm illustrate fast convergence towards an optimal team composition for a shift, an efficient sequence of tasks and a high probability of subsequent implementation given the stochastic durations of the tasks. In the paper, the algorithm for the extended RCPSP is analyzed in an experimental evaluation using real-world example problems with various sizes, resource complexities and tightness.
Keywords: maintenance management, scheduling, resource constrained project scheduling problem, genetic algorithms
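As a rough illustration of the activity-list representation mentioned above, a minimal sketch of decoding one chromosome into a shift schedule with a serial schedule generation scheme; the task data, single resource capacity and the omission of the Bayesian time windows and multi-skill constraints are simplifying assumptions, not the authors' full algorithm.

```python
# Decode an activity-list chromosome into start/finish times, respecting
# precedence relations and a single renewable resource (technicians).
from typing import Dict, List, Tuple

tasks: Dict[str, Tuple[int, int, List[str]]] = {
    # name: (duration, technicians required, predecessors)
    "A": (2, 1, []),
    "B": (3, 2, ["A"]),
    "C": (2, 1, ["A"]),
    "D": (1, 2, ["B", "C"]),
}
CAPACITY = 3  # technicians available in the shift

def decode(activity_list: List[str]) -> Dict[str, Tuple[int, int]]:
    schedule: Dict[str, Tuple[int, int]] = {}
    usage: Dict[int, int] = {}  # technicians in use per time unit
    for name in activity_list:
        dur, req, preds = tasks[name]
        start = max((schedule[p][1] for p in preds), default=0)
        # shift the task right until the resource capacity is respected
        while any(usage.get(t, 0) + req > CAPACITY for t in range(start, start + dur)):
            start += 1
        for t in range(start, start + dur):
            usage[t] = usage.get(t, 0) + req
        schedule[name] = (start, start + dur)
    return schedule

# one chromosome as evaluated inside the evolutionary algorithm
print(decode(["A", "C", "B", "D"]))
```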
Procedia PDF Downloads 231
526 A Study on Neighborhood of Dwelling with Historical-Islamic Architectural Elements
Authors: M.J. Seddighi, Moradchelleh, M. Keyvan
Abstract:
The ultimate goal in building a city is to provide a pleasant, comfortable and nurturing environment as a context for public life. The city environment establishes a strong connection between people and their surrounding habitat, acting as a medium for social interactions among citizens. An urban environment and appropriate municipal facilities are the only way to ensure proper communication between the city and its citizens and also among the citizens themselves. There is a need for complementary elements between buildings and constructions to settle city life, through which movement, comfort, reactions and anxiety are adjusted and the spirit of the city is reflected. In the surging development of society, urban spaces undergo evolution, sometimes causing symbols to fade and waste and, as a result, leading to the destruction of the sense of belonging between humans and their physical surroundings. Houses and living spaces exhibit a materialistic reflection of lifestyle. In other words, the way of life shapes the symbolic essence of living spaces. In addition, lifestyle is a sociocultural factor, encompassing concepts of culture, morality, worldview, and national character. Culture is responsible for some crucial meaningful needs, which can be wide-ranging because they depend on various causes such as the perception and interpretation of beliefs, philosophy of life, interaction with neighbors and protection against climate and enemies. The bilateral relationship between humans and nature is the main factor that needs to be properly addressed, because the approach taken toward landscape and nature has a pertinent influence on the creation and shaping of the structure of a house. The first response of humans in tackling the environment is to build a “shelter” as a place of dwelling. This has been a crucial factor in all time periods. In the proposed study, dwelling along the Khorasgan Stream, an area located in one of the important historical cities of Iran, has been studied. The Khorasgan Stream is a basic constituent element of the present architectural form of Isfahan. The influence of Islamic spiritual culture and of neighborhood with the historical elements on the dwelling of the selected location, and subsequently on other regions of the town, is presented.
Keywords: dwelling, neighborhood, historical, Islamic, architectural elements
Procedia PDF Downloads 411
525 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood
Authors: Randa Alharbi, Vladislav Vyshemirsky
Abstract:
Systems biology is an important field in science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, define the essential components of the system and represent an appropriate law that can define the interactions between its components. Complex biological systems exhibit stochastic behaviour. Thus, probabilistic models are suitable for describing and analysing biological systems. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model; it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach which is based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking into account their computational time. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)
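A minimal sketch of the likelihood-free idea described above, using ABC rejection around a Gillespie-simulated toy birth-death CTMC rather than the Repressilator; all rates, priors and thresholds are illustrative assumptions.

```python
# ABC rejection for a CTMC: draw a parameter from the prior, simulate the model
# with the Gillespie algorithm, and accept the draw if a summary statistic of the
# simulation is close enough to the "observed" data.
import numpy as np

rng = np.random.default_rng(0)

def gillespie(birth, death, x0=10, t_end=5.0):
    t, x = 0.0, x0
    while t < t_end:
        rates = np.array([birth, death * x])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < rates[0] / total else -1
    return x  # summary statistic: population at t_end

# "observed" data generated with a known birth rate of 5.0
observed = np.array([gillespie(5.0, 0.5) for _ in range(10)])

accepted = []
for _ in range(1000):
    birth = rng.uniform(0.0, 10.0)                       # prior draw
    simulated = np.array([gillespie(birth, 0.5) for _ in range(10)])
    if abs(simulated.mean() - observed.mean()) < 1.0:    # distance threshold
        accepted.append(birth)

print(f"ABC posterior mean of birth rate ~ {np.mean(accepted):.2f}")
```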
Procedia PDF Downloads 202
524 Globalisation, Growth and Sustainability in Sub-Saharan Africa
Authors: Ourvashi Bissoon
Abstract:
Sub-Saharan Africa, in addition to being resource rich, is increasingly being seen as having huge growth potential and, as a result, is increasingly attracting MNEs to its soil. To empirically assess the effectiveness of GDP in tracking sustainable resource use and the role played by MNEs in Sub-Saharan Africa, a panel data analysis has been undertaken for 32 countries over thirty-five years. The time horizon spans the period 1980-2014 to reflect the evolution from before the publication of the pioneering Brundtland report on sustainable development to date. Multinationals’ presence is proxied by the level of FDI stocks. The empirical investigation first focuses on the impact of trade openness and MNE presence on the traditional measure of economic growth, namely the GDP growth rate; then on the genuine savings (GS) rate, a measure of weak sustainability developed by the World Bank which assumes the substitutability between different forms of capital; and finally on the adjusted net national income (aNNI), a measure of green growth which caters for the depletion of natural resources. For countries with significant exhaustible natural resources and important foreign investor presence, the adjusted net national income (aNNI) can be a better indicator of economic performance than GDP growth (World Bank, 2010). The issue of potential endogeneity and reverse causality is also addressed, in addition to robustness tests. The findings indicate that FDI and openness contribute significantly and positively to the GDP growth of the countries in the sample; however, there is a threshold level of institutional quality below which FDI has a negative impact on growth. When the GDP growth rate is replaced by the GS rate, a natural resource curse becomes evident. The rents being generated from the exploitation of natural resources are not being re-invested into other forms of capital, namely human and physical capital. FDI and trade patterns may be setting the economies in the sample on an unsustainable path of resource depletion. The resource curse is confirmed when utilising the aNNI as well, thus implying that the GDP growth measure may not be reliable for capturing sustainable development.
Keywords: FDI, sustainable development, genuine savings, sub-Saharan Africa
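As a rough sketch of the type of panel estimation described above (growth regressed on FDI stocks and trade openness with country and year fixed effects), assuming a synthetic dataset and illustrative column names rather than the study's data or exact specification:

```python
# Two-way fixed-effects panel regression on a synthetic country-year panel.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for c in [f"c{i}" for i in range(8)]:
    for y in range(1980, 1990):
        fdi = rng.uniform(5, 30)
        openness = rng.uniform(30, 90)
        growth = 0.05 * fdi + 0.02 * openness + rng.normal(0, 0.5)
        rows.append({"country": c, "year": y, "gdp_growth": growth,
                     "fdi_stock": fdi, "openness": openness})
df = pd.DataFrame(rows)

model = smf.ols("gdp_growth ~ fdi_stock + openness + C(country) + C(year)", data=df)
result = model.fit(cov_type="cluster",
                   cov_kwds={"groups": df["country"].astype("category").cat.codes})
print(result.params[["fdi_stock", "openness"]])
```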
Procedia PDF Downloads 215
523 SynKit: An Event-Driven and Scalable Microservices-Based Kitting System
Authors: Bruno Nascimento, Cristina Wanzeller, Jorge Silva, João A. Dias, André Barbosa, José Ribeiro
Abstract:
The increasing complexity of logistics operations stems from evolving business needs, such as the shift from mass production to mass customization, which demands greater efficiency and flexibility. In response, Industry 4.0 and 5.0 technologies provide improved solutions to enhance operational agility and better meet market demands. The management of kitting zones, combined with the use of Autonomous Mobile Robots, faces challenges related to coordination, resource optimization, and rapid response to customer demand fluctuations. Additionally, implementing lean manufacturing practices in this context must be carefully orchestrated by intelligent systems and human operators to maximize efficiency without sacrificing the agility required in an advanced production environment. This paper proposes and implements a microservices-based architecture integrating principles from Industry 4.0 and 5.0 with lean manufacturing practices. The architecture enhances communication and coordination between autonomous vehicles and kitting management systems, allowing more efficient resource utilization and increased scalability. The proposed architecture focuses on the modularity and flexibility of operations, enabling seamless adaptation to changing demands and the efficient allocation of resources in real time. This approach is expected to significantly improve the efficiency and scalability of logistics operations by reducing waste and optimizing resource use while improving responsiveness to demand changes. The implementation of this architecture provides a robust foundation for the continuous evolution of kitting management and process optimization. It is designed to adapt to dynamic environments marked by rapid shifts in production demands and real-time decision-making. It also ensures seamless integration with automated systems, aligning with Industry 4.0 and 5.0 needs while reinforcing lean manufacturing principles.
Keywords: microservices, event-driven, kitting, AMR, lean manufacturing, industry 4.0, industry 5.0
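A minimal, in-process sketch of the event-driven coordination idea: services subscribe to topics on a bus and react to demand-change events. The topic, payload and service names are hypothetical; a production system would place a message broker between independently deployed microservices.

```python
# Toy event bus: the kitting service and the AMR dispatcher both react to a
# demand-change event published by an order service.
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()

def kitting_service(event: dict) -> None:
    print(f"[kitting] re-planning kit {event['kit_id']} for qty {event['quantity']}")

def amr_dispatcher(event: dict) -> None:
    print(f"[amr] re-routing robot to zone {event['zone']}")

bus.subscribe("order.demand_changed", kitting_service)
bus.subscribe("order.demand_changed", amr_dispatcher)
bus.publish("order.demand_changed", {"kit_id": "K-42", "quantity": 12, "zone": "Z3"})
```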
Procedia PDF Downloads 22
522 An Overview of Posterior Fossa Associated Pathologies and Segmentation
Authors: Samuel J. Ahmad, Michael Zhu, Andrew J. Kobets
Abstract:
Segmentation tools continue to advance, evolving from manual methods to automated contouring technologies utilizing convolutional neural networks. These techniques have been used to evaluate ventricular and hemorrhagic volumes in the past but may be applied in novel ways to assess posterior fossa-associated pathologies such as Chiari malformations. Herein, we summarize the literature pertaining to segmentation in the context of this and other posterior fossa-based diseases such as trigeminal neuralgia, hemifacial spasm, and posterior fossa syndrome. A literature search for volumetric analysis of the posterior fossa identified 27 papers in which semi-automated segmentation, automated segmentation, manual segmentation, linear measurement-based formulas, and the Cavalieri estimator were utilized. These studies produced superior data compared to older methods utilizing formulas for rough volumetric estimations. The most commonly used segmentation technique was semi-automated segmentation (12 studies). Manual segmentation was the second most common technique (7 studies). Automated segmentation techniques (4 studies) and the Cavalieri estimator (3 studies), a point-counting method that uses a grid of points to estimate the volume of a region, were the next most commonly used techniques. The least commonly utilized segmentation technique was linear measurement-based formulas (1 study). Semi-automated segmentation produced accurate, reproducible results. However, it is apparent that there does not exist a single semi-automated software package, open source or otherwise, that has been widely applied to the posterior fossa. Fully automated segmentation via open-source software such as FSL and FreeSurfer produced highly accurate posterior fossa segmentations. Various forms of segmentation have been used to assess posterior fossa pathologies, and each has its advantages and disadvantages. According to our results, semi-automated segmentation is the predominant method. However, atlas-based automated segmentation is an extremely promising method that produces accurate results. Future evolution of segmentation technologies will undoubtedly yield superior results, which may be applied to posterior fossa-related pathologies. Medical professionals will save time and effort analyzing large sets of data due to these advances.
Keywords: Chiari, posterior fossa, segmentation, volumetric
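For reference, the Cavalieri estimator mentioned above reduces to a simple point-counting calculation; the spacings and counts below are made-up values, not data from the reviewed studies.

```python
# Cavalieri volume estimate: slice spacing x area represented by each grid point
# x total number of grid points falling inside the region across all slices.
slice_spacing_mm = 5.0                  # distance between imaging slices
grid_spacing_mm = 5.0                   # spacing of the counting grid overlaid on each slice
area_per_point_mm2 = grid_spacing_mm ** 2
points_per_slice = [112, 140, 155, 149, 120, 95]  # points inside the posterior fossa on each slice

volume_mm3 = slice_spacing_mm * area_per_point_mm2 * sum(points_per_slice)
print(f"Cavalieri volume estimate: {volume_mm3 / 1000.0:.1f} cm^3")
```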
Procedia PDF Downloads 106
521 Gassing Tendency of Natural Ester Based Transformer Oils: Low Alkane Generation in Stray Gassing Behaviour
Authors: Thummalapalli CSM Gupta, Banti Sidhiwala
Abstract:
Mineral oils of the naphthenic and paraffinic type have traditionally been used as insulating liquids in transformer applications to protect the solid insulation from moisture and to ensure effective heat transfer/cooling. The performance of these types of oils has been proven in the field over many decades, and the condition and performance of transformers have been successfully monitored and diagnosed through oil properties and dissolved gas analysis methods. Different types of gases effectively represent various types of faults due to components or operating conditions. While a large database has been generated in the industry on dissolved gas analysis for mineral oil based transformer oils, together with various models for fault prediction and analysis, oil specifications and standards have also been modified to include stray gassing limits, which cover low-temperature faults and become an effective preventative maintenance tool that can greatly help to identify the reasons for the breakdown of electrical insulating materials and related components. Natural esters have seen a rise in popularity in recent years due to their "green" credentials. Some of their benefits include biodegradability, a higher fire point, improvement in transformer load capability and improved solid insulation life compared with mineral oils. However, the stray gases evolved, such as hydrogen and hydrocarbons like methane (CH4) and ethane (C2H6), show very high values, much higher than the limits of mineral oil standards. Though standards for these types of esters are yet to be developed, the higher values of hydrocarbon gases in products currently available on the market are of concern, as they might be interpreted as a fault in transformer operation. The current paper focuses on developing a natural ester based transformer oil whose stray gassing levels, measured by standard test methods, are much lower than those of currently available products, and presents experimental results under various test conditions along with the underlying mechanism.
Keywords: biodegradability, fire point, dissolved gas analysis, stray gassing
Procedia PDF Downloads 96
520 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data
Authors: S. Jurado, E. Pazmino
Abstract:
Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community comprising researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process removes or 'burns' void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize the execution time and the use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn layer data during the iterative process. The medial axis determination was conducted by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and utilized to determine the pore-throat size distribution. A graphical user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software allows input of HRXMT data to calculate porosity, medial axis, and pore-throat size distribution and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution and porosity determination of 100³, 320³ and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data for the academic community.
Keywords: medial axis, pore-throat distribution, porosity, porous media
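A compact sketch of the layer-by-layer 'burn' idea on a binary voxel domain, assuming a small random test volume rather than HRXMT data; the published software's optimizations (subdomain segmentation, vectorization, single-layer extraction) are not reproduced.

```python
# Iteratively peel ("burn") void voxels adjacent to solids; the iteration at
# which each voxel is burnt gives a distance-like label whose local maxima
# approximate the medial axis.
import numpy as np
from scipy.ndimage import binary_erosion

rng = np.random.default_rng(3)
solid = rng.random((60, 60, 60)) < 0.4        # True = solid grain, False = void
void = ~solid
porosity = void.mean()

burn_layer = np.zeros(void.shape, dtype=np.int32)
remaining = void.copy()
layer = 0
while remaining.any():
    layer += 1
    eroded = binary_erosion(remaining)        # peel one layer of void voxels
    burnt_now = remaining & ~eroded
    burn_layer[burnt_now] = layer
    remaining = eroded

print(f"porosity = {porosity:.3f}, deepest burn layer = {burn_layer.max()}")
```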
Procedia PDF Downloads 115
519 The Roman Fora in North Africa Towards a Supportive Protocol to the Decision for the Morphological Restitution
Authors: Dhouha Laribi Galalou, Najla Allani Bouhoula, Atef Hammouda
Abstract:
This research delves into the fundamental question of the morphological restitution of built archaeology in order to place it in its paradigmatic context and to seek answers to it. Indeed, the understanding of the object of study, its analysis, and the methodology for solving the morphological problem posed are manageable only by means of a thoughtful strategy that draws on well-defined epistemological scaffolding. In this vein, the crisis of natural reasoning in archaeology has generated multiple changes in the field, ranging from the use of new tools to the integration of an archaeological information system in which urbanization involves the interplay of several disciplines. The built archaeological topic is also an architectural and morphological object, a set of articulated elementary data, the understanding of which is approached from a logicist point of view. Morphological restitution is no exception to the rule, and the interchange between the different disciplines uses the capacity of each to frame the reflection on the incomplete elements of a given architecture or on its different phases and multiple states of existence. The logicist sequence is furnished by the set of scattered or destroyed elements found, but also by what can be called a rule base, which contains the set of rules for the architectural construction of the object. The knowledge base built from the archaeological literature also provides a reference that enters into the search for forms and articulations. The choice of the Roman forum in North Africa is justified by the great urban and architectural characteristics of this entity. Research on the forum involves a fairly large knowledge base but also provides the researcher with material to study, from a morphological and architectural point of view, from the scale of the city down to the architectural detail. The experimentation of the knowledge deduced at the paradigmatic level, as well as the deduction of an analysis model, is then carried out on the basis of a well-defined context, which frames the experimentation from the elaboration of the morphological information container attached to the rule base and the knowledge base. The use of logicist analysis and artificial intelligence has allowed us first to question the aspects already known in order to measure the credibility of our system, which remains above all a decision support tool for the morphological restitution of Roman fora in North Africa. This paper presents a first experimentation of the model elaborated during this research, a model framed by a paradigmatic discussion which thus tries to position the research in relation to the existing paradigmatic and experimental knowledge on the issue.
Keywords: classical reasoning, logicist reasoning, archaeology, architecture, Roman forum, morphology, calculation
Procedia PDF Downloads 147
518 Systematic Identification of Noncoding Cancer Driver Somatic Mutations
Authors: Zohar Manber, Ran Elkon
Abstract:
Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in the tumor's genome are functionally neutral; however, some cause damage to critical processes and provide the tumor with a selective growth advantage (termed cancer driver mutations). Current research on the functional significance of SMs is mainly focused on finding alterations in protein coding sequences. However, the exome comprises only 3% of the human genome, and thus, SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore, it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed its "set of regulatory elements" (SRE)). Considering multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach is greatly dependent on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and statistical analyses that often considered each regulatory element separately. In this study, we analyzed more than 2,500 whole-genome sequence (WGS) samples provided by The Cancer Genome Atlas (TCGA) and The International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took into account the combinatorial aspect of gene regulation by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomic resources. The identification of candidate driver noncoding SMs is based on their recurrence. We searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of the recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected - in seven different cancer types - numerous "hotspots" for SMs. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene (using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS).
Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements
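A hedged sketch of the kind of recurrence test described above: the number of SMs observed across a gene's SRE is compared with the count expected under a background mutation rate using a Poisson model. The rate, SRE length and counts are placeholder values, not results from the TCGA/ICGC analysis.

```python
# Poisson recurrence test for a single gene's set of regulatory elements (SRE).
from scipy.stats import poisson

background_rate = 2.0e-6        # mutations per base per sample (assumed)
sre_length_bp = 25_000          # combined length of all enhancers of the gene
n_samples = 2_500               # whole genomes analysed
observed_mutations = 180        # SMs falling inside the SRE across the cohort

expected = background_rate * sre_length_bp * n_samples
p_value = poisson.sf(observed_mutations - 1, expected)   # P(X >= observed)
print(f"expected = {expected:.1f}, observed = {observed_mutations}, p = {p_value:.2e}")
```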
Procedia PDF Downloads 104
517 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model
Authors: Yew Mun Yip, Dawei Zhang
Abstract:
Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, the machinery will malfunction and lead to misfolding diseases. Although in vitro experiments are able to conclude that mutations of the amino acid sequence lead to incorrectly folded protein structures, these experiments are unable to decipher the folding process. Therefore, molecular dynamics (MD) simulations are employed to simulate the folding process so that our improved understanding of the folding process will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding process of peptides. Secondary structures are formed via the hydrogen bonds formed between the backbone atoms (C, O, N, H). It is important that the hydrogen bond energy computed during the MD simulation is accurate in order to direct the folding process to the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom will attract greater electron density from the less electronegative atom towards itself. This is known as the polarization effect. Since the polarization effect changes the electron density of the two atoms in close proximity, the atomic charges of the two atoms should also vary based on the strength of the polarization effect. However, the fixed atomic charge scheme in force fields does not account for the polarization effect. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulations by updating the atomic charges of the backbone hydrogen bond atoms according to equations, derived from quantum-mechanical calculations, that relate the amount of charge transferred to an atom to the length of the hydrogen bond. Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time-step of the simulation and maintains the dynamic update of atomic charges, thereby reducing computational cost and time while still accounting for the polarization effect dynamically. The PSBC model is applied to two different β-peptides, namely the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose structure is folded in vitro and studied by NMR, and the trpzip peptides, double-stranded β-sheets where a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.
Keywords: hydrogen bond, polarization effect, protein folding, PSBC
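To illustrate the idea of updating backbone charges as a function of hydrogen-bond length, a toy sketch follows; the linear functional form, cutoff distances and maximum transferred charge are assumptions for demonstration only, not the PSBC equations, which are derived from quantum-mechanical calculations.

```python
# Toy charge-transfer model: shorter hydrogen bonds transfer more charge, and the
# charge shift moves electron density toward the more electronegative acceptor.
def transferred_charge(hbond_length: float,
                       r_min: float = 1.6,
                       r_max: float = 2.5,
                       q_max: float = 0.10) -> float:
    """Charge (in e) transferred across an O...H hydrogen bond of given length (angstroms)."""
    if hbond_length >= r_max:
        return 0.0                      # bond too long: no polarization correction
    if hbond_length <= r_min:
        return q_max                    # very short bond: maximum transfer
    return q_max * (r_max - hbond_length) / (r_max - r_min)

def update_charges(q_donor_h: float, q_acceptor_o: float, hbond_length: float):
    dq = transferred_charge(hbond_length)
    # the acceptor O gains electron density (more negative), the donor H becomes more positive
    return q_donor_h + dq, q_acceptor_o - dq

print(update_charges(0.31, -0.51, 1.9))
```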
Procedia PDF Downloads 270
516 Physicians’ Knowledge and Perception of Gene Profiling in Malaysia: A Pilot Study
Authors: Farahnaz Amini, Woo Yun Kin, Lazwani Kolandaiveloo
Abstract:
The availability of different genetic tests after the completion of the Human Genome Project increases physicians' responsibility to keep themselves updated on the potential implementation of these genetic tests in their daily practice. However, due to a number of barriers, many physicians are still either not aware of these tests or not willing to offer or refer their patients for genetic tests. This study conducted an anonymous, cross-sectional, mail-based survey to develop primary data on Malaysian physicians' level of knowledge and perception of gene profiling. The questionnaire had 29 questions. Total scores on selected questions were used to assess the level of knowledge; the highest possible score was 11. Descriptive statistics, one-way ANOVA and chi-squared tests were used for statistical analysis. Sixty-three completed questionnaires were returned by 27 general practitioners (GPs) and 36 medical specialists. Responders' ages ranged from 24 to 55 years (mean 30.2 ± 6.4). About 40% of the participants rated themselves as having a poor level of knowledge of genetics in general, whilst 60% believed that they had a fair level of knowledge. However, almost half (46%) of the respondents felt that they were not knowledgeable about available genetic tests. A majority (94%) of the responders were not aware of any lab or company offering gene profiling services in Malaysia. Only 4% of participants were aware of the use of gene profiling for determining the dosage of some drugs. Respondents perceived greater utility of gene profiling for breast cancer (38%) compared to familial colorectal cancer (3%). The knowledge score ranged from 2 to 8 (mean 4.38 ± 1.67). No significant difference between the knowledge scores of GPs and specialists was observed, with scores of 4.19 and 4.58, respectively. There was no significant association between any demographic factor and the level of knowledge; however, those who graduated between 2001 and 2005 had a higher level of knowledge. Overall, 83% of participants showed a relatively high level of perception of the value of gene profiling to detect a patient's risk of disease. However, low perception was observed for both the use of gene profiling in the general population in order to alter their lifestyle (25%) and having the full sequence of a patient's genome for the purpose of determining the patient's best match for treatment (18%). The lack of clinical guidelines, limited provider knowledge and awareness, lack of time and resources to educate patients, lack of evidence-based clinical information and the cost of tests were the most frequently mentioned barriers to ordering gene profiling. In conclusion, the Malaysian physicians who participated in this study had a mediocre level of knowledge and awareness of gene profiling. The low exposure to genetic questions and problems might be a key predictor of the lack of awareness and knowledge of available genetic tests. Educational and training workshops might be useful in helping Malaysian physicians incorporate gene profiling into practice for eligible patients.
Keywords: gene profiling, knowledge, Malaysia, physician
Procedia PDF Downloads 326
515 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of the short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
Keywords: data refinement, machine learning, mutual information, short-term latency prediction
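A rough sketch of the prediction setup described above: lagged median latencies plus the total accumulation feed an XGBoost regressor predicting the median latency ahead, compared against the 15-minute-ago baseline. The synthetic data and the small feature set are placeholders for the study's inputs and tuning.

```python
# XGBoost regression on synthetic lagged-latency features, benchmarked against
# the "median latency 15 minutes ago" baseline.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

rng = np.random.default_rng(7)
n = 2000
median_lag5 = rng.uniform(15, 35, n)               # median latency 5 min ago (minutes)
median_lag15 = median_lag5 + rng.normal(0, 2, n)   # median latency 15 min ago
total_acc = rng.uniform(200, 900, n)               # total vehicle accumulation on the segment
target = 0.7 * median_lag5 + 0.002 * total_acc + rng.normal(0, 1, n)

X = pd.DataFrame({"median_lag5": median_lag5,
                  "median_lag15": median_lag15,
                  "total_accumulation": total_acc})
model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X[:1500], target[:1500])

baseline_mse = np.mean((median_lag15[1500:] - target[1500:]) ** 2)
model_mse = np.mean((model.predict(X[1500:]) - target[1500:]) ** 2)
print(f"15-minute-ago baseline MSE: {baseline_mse:.2f}, XGBoost MSE: {model_mse:.2f}")
```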
Procedia PDF Downloads 169
514 Time of Death Determination in Medicolegal Death Investigations
Authors: Michelle Rippy
Abstract:
Medicolegal death investigation has historically been a field that does not receive much research attention or advancement, as all of the subjects are deceased. Public health threats, drug epidemics and contagious diseases are typically recognized in decedents first, and thorough and accurate death investigations can assist epidemiology research and prevention programs. One vital component of medicolegal death investigation is determining the decedent's time of death. An accurate time of death can assist in corroborating alibis, determining the sequence of deaths in multiple-casualty circumstances and providing vital facts in civil situations. Popular television portrays an unrealistic forensic ability to provide the exact time of death to the minute for someone found deceased with no witnesses present. In reality, the time of death of an unattended decedent can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, liver temperature measurement was an invasive action taken by death investigators to determine the decedent's core temperature. The core temperature was entered into an equation to determine an approximate time of death. Due to many inconsistencies with the placement of the thermometer and other variables, the accuracy of liver temperatures was called into question and this once commonplace action lost scientific support. Currently, medicolegal death investigators utilize three major after-death or post-mortem changes at a death scene. Many factors are considered in the subjective determination of the time of death, including the cooling of the decedent, the stiffness of the muscles, the internal settling of blood, clothing, ambient temperature, disease and recent exercise. Current research is utilizing non-invasive hospital-grade tympanic thermometers to measure the temperature in each of the decedent's ears. This tool can be used at the scene and, in conjunction with scene indicators, may provide a more accurate time of death. The research is significant and important to investigations and can bring accuracy to a historically inaccurate area, considerably improving criminal and civil death investigations. The goal of the research is to provide a scientific basis for time of death determination in unwitnessed deaths, instead of the art that it currently is. The research is currently in progress, with expected termination in December 2018. There are currently 15 completed case studies with vital information including the ambient temperature, the decedent's height/weight/sex/age, layers of clothing, found position, whether medical intervention occurred and whether the death was witnessed. These data will be analyzed with respect to the multiple variables studied and will be available for presentation in January 2019.
Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic
Procedia PDF Downloads 118
513 The Effect of Relocating a Red Deer Stag on the Size of Its Home Range and Activity
Authors: Erika Csanyi, Gyula Sandor
Abstract:
In the course of the examination, we sought to answer the question of how and to what extent the home range and daily activity of a red deer stag relocated from its habitual surroundings change. We conducted the examination in two hunting areas in Hungary, about 50 km from one another. The control area was in the north of Somogy County, while the sample area was an area of similar features in terms of forest cover, tree stock, agricultural structure, altitude above sea level, climate, etc. in the south of Somogy County. Three middle-aged red deer stags were captured with rocket nets, immobilized and marked with GPS-Plus collars manufactured by Vectronic Aerospace Gesellschaft mit beschränkter Haftung. One captured stag was relocated. We monitored deer movements over 24-hour periods for 3 months. In the course of the examination, we analysed the behaviour of the relocated stag and of those that remained in their original habitat, as well as the temporal evolution of their behaviour. We examined the characteristics of the marked stags' daily activities and the hourly distance they covered. We intended to find out the difference between the behaviour of the stags remaining in their original habitat and that of the one relocated to a more distant but similar habitat. In summary, based on our findings, it can be established that such enforced relocation to a different habitat (e.g., game relocation) significantly increases the home range of the animal in the months following relocation. Home ranges were calculated using the full data set and the minimum convex polygon (MCP) method. Relocation did not increase the nocturnal and diurnal movement activity of the animal in question. Our research found that the home range of the relocated stag proved to be significantly larger than that of the stags that were not relocated. The results have been presented in tabular form and have also been displayed on a map. Based on the results, it can be established that relocation inherently includes the risk of falling victim to poaching or vehicle collision. It was only in the third month following relocation that the home range of the relocated stag subsided to the level of the stags that were not relocated. It is advisable to take these observations into consideration when relocating red deer for nature conservation or game management purposes.
Keywords: Cervus elaphus, home range, relocation, red deer stag
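For illustration, the MCP home-range calculation reduces to the area of the convex hull of the GPS fixes; the coordinates below are hypothetical projected positions, not the collar data from this study.

```python
# Minimum convex polygon (MCP) home range: convex hull area of GPS fixes.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)
# hypothetical projected positions in metres (e.g., UTM easting/northing)
fixes = rng.normal(loc=[552_000.0, 5_120_000.0], scale=[800.0, 600.0], size=(500, 2))

hull = ConvexHull(fixes)
area_ha = hull.volume / 10_000.0   # for 2-D points, ConvexHull.volume is the enclosed area (m^2)
print(f"MCP home range from {len(fixes)} fixes: {area_ha:.1f} ha")
```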
Procedia PDF Downloads 137
512 Virtual Approach to Simulating Geotechnical Problems under Both Static and Dynamic Conditions
Authors: Varvara Roubtsova, Mohamed Chekired
Abstract:
Recent studies on the numerical simulation of geotechnical problems show the importance of considering the soil micro-structure. At this scale, soil is a discrete particle medium where the particles can interact with each other and with water flow under external forces, structural loads or natural events. This paper presents research conducted in a virtual laboratory named SiGran, developed at IREQ (Institut de recherche d’Hydro-Quebec) for the purpose of investigating a broad range of problems encountered in geotechnics. Using the Discrete Element Method (DEM), SiGran simulates granular materials directly by applying Newton’s laws to each particle. The water flow is simulated by using the Marker and Cell (MAC) method to solve the full form of the Navier-Stokes equations for an incompressible viscous liquid. In this paper, examples of numerical simulations and their comparisons with real experiments have been selected to show the complexity of geotechnical research at the micro level. These examples describe transient flows into a porous medium, the interaction of particles in a viscous flow, the compaction of saturated and unsaturated soils and the phenomenon of liquefaction under seismic load. They also provide an opportunity to present SiGran’s capacity to compute the distribution and evolution of energy by type (particle kinetic energy, particle internal elastic energy, energy dissipated by friction or as a result of viscous interaction with the flow, and so on). This work also includes the first attempts to apply micro-scale discrete results at a macro continuum level, where the Smoothed Particle Hydrodynamics (SPH) method was used to solve the system of governing equations. The material behaviour equation is based on the results of simulations carried out at the micro level. The possibility of combining the three methods (DEM, MAC and SPH) is discussed.
Keywords: discrete element method, marker and cell method, numerical simulation, multi-scale simulations, smoothed particle hydrodynamics
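A stripped-down sketch of the DEM idea (Newton's second law applied per particle with a linear spring-dashpot contact force), reduced to two particles in one dimension; SiGran's 3-D contact laws, MAC fluid coupling and boundary conditions are not represented, and all material parameters are assumed.

```python
# Two-particle 1-D DEM collision with a linear spring-dashpot contact law,
# integrated explicitly with a small time step.
import numpy as np

k_n, c_n = 1.0e4, 5.0          # normal spring stiffness and damping (assumed)
radius, mass, dt = 0.01, 0.05, 1.0e-5

x = np.array([0.0, 0.025])     # particle centre positions (m)
v = np.array([0.5, -0.5])      # approaching velocities (m/s)

for step in range(2000):
    gap = (x[1] - x[0]) - 2 * radius              # negative gap = overlap
    force = 0.0
    if gap < 0:                                    # particles in contact
        overlap = -gap
        rel_v = v[1] - v[0]
        force = max(k_n * overlap - c_n * rel_v, 0.0)  # repulsive spring + dashpot
    a = np.array([-force, force]) / mass           # equal and opposite accelerations
    v += a * dt                                    # explicit (symplectic Euler) update
    x += v * dt

print(f"final velocities: {v[0]:.3f}, {v[1]:.3f} m/s")
```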
Procedia PDF Downloads 302
511 An Assessment of Female Representation in Philippine Cinema in Comparison to American Cinema (1975 to 2020)
Authors: Amanda Julia Binay, Patricia Elise Suarez
Abstract:
Female representation in media is an important subject in the discussion of gender equality, especially in impactful and influential media like film. As the Filipino film industry continues to grow and evolve, the need for analysis of Filipino female representation on screen is imperative. Additionally, there has been limited research on female representation in the Philippine film scene. Thus, this paper aims to analyze the presence and evolution of female representation in Philippine cinema and compare the findings with those of American films to see how Filipino filmmakers hold their own against the standards of international movements that call for more and better female representation, especially in Hollywood. The films selected were Filipino and American films released within the years 1975 to 2020, in five (5) year intervals. Twenty (20) critically acclaimed and highest-grossing Filipino films and twenty (20) critically acclaimed and highest-grossing Hollywood films were then subjected to the Bechdel and Peirce tests to obtain statistical measures of their female representation. The findings of the study reveal that the presence of female representation in Philippine film history has been consistent and has continued to grow and evolve throughout the years, with strong female leads with vibrant characteristics and diverse stories. However, analysis of female representation in American films has shown an extreme lack thereof, with more misogynistic, sexist, and limiting ideals. Thus, the study concludes that the state of female representation in the Philippine cinema and film industry holds its own when compared to the American cinema and film industry and even outperforms it in many aspects of female representation, such as the consistent inclusion and depiction of multi-dimensional female leads and female relationships. Hence, the study implies that women's consistent presence in Philippine cinema mirrors Filipino women's prominent role in Philippine society and that American cinema must continue to make efforts to change its portrayals of female characters, leads, and relationships to make them more grounded in reality.
Keywords: female representation, gender studies, feminism, Philippine cinema, American cinema, Bechdel test, Peirce test, comparative analysis
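As an illustration of how the Bechdel criteria can be applied to coded film data, a small sketch follows; the data schema and the example film are hypothetical, not items from the study's sample.

```python
# Bechdel test: (1) at least two named women, (2) who talk to each other,
# (3) about something other than a man.
def passes_bechdel(film: dict) -> bool:
    women = {c["name"] for c in film["characters"] if c["gender"] == "F" and c["name"]}
    if len(women) < 2:
        return False
    for conv in film["conversations"]:
        participants = set(conv["participants"])
        if participants <= women and len(participants) >= 2 and not conv["about_a_man"]:
            return True
    return False

film = {
    "title": "Example Film (1995)",
    "characters": [{"name": "Maria", "gender": "F"},
                   {"name": "Luz", "gender": "F"},
                   {"name": "Carlos", "gender": "M"}],
    "conversations": [
        {"participants": ["Maria", "Carlos"], "about_a_man": False},
        {"participants": ["Maria", "Luz"], "about_a_man": False},
    ],
}
print(passes_bechdel(film))  # True
```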
Procedia PDF Downloads 381
510 Study of Water Cluster-Amorphous Silica Collisions in the Extreme Space Environment Using the ReaxFF Reactive Force Field Molecular Dynamics Simulation Method
Authors: Ali Rahnamoun, Adri van Duin
Abstract:
The concept of high-velocity particle impact on spacecraft surface materials has been one of the important issues in the design of such materials. Among these particles, water clusters might be the most abundant and the most important particles to be studied. The importance of water clusters is that, upon impact on the surface of the materials, they can cause damage to the material, and if they are sub-cooled water clusters, they can also attach to the surface of the materials and cause ice accumulation, which is very problematic in spacecraft and aircraft operations. The dynamics of collisions between amorphous silica structures and water clusters with impact velocities of 1 km/s to 10 km/s are studied using the ReaxFF reactive molecular dynamics simulation method. The initial water clusters include 150 water molecules, and the water clusters are collided with the surfaces of amorphous fully oxidized and suboxide silica structures. These simulations show that the most abundant molecules observed on the silica surfaces, other than reflected water molecules, are H3O+ and OH- for the water cluster impacts on suboxide and fully oxidized silica structures, respectively. The effect of impact velocity on the change in silica mass is studied. At high impact velocities, the water molecules attach to the silica surface through a chemisorption process, meaning that the water molecule dissociates through the interaction with the silica surface. However, at low impact velocities, physisorbed water molecules are also observed, which means a water molecule attaches to and accumulates on the silica surface. The amount of physisorbed water molecules at low velocities is higher on the suboxide silica surfaces. The evolution of the temperatures of the water clusters during the collisions indicates that the possibility of electron excitation at impact velocities of less than 10 km/s is minimal, and ReaxFF reactive molecular dynamics simulation can predict the chemistry of these hypervelocity impacts. However, at impact velocities close to 10 km/s, the average temperature of the impacting water clusters increases to about 2000 K, with individual molecules occasionally reaching temperatures of over 8000 K, and thus it will be prudent to consider the concept of electron excitation at these higher impact velocities, which goes beyond the current ReaxFF capability.
Keywords: spacecraft materials, hypervelocity impact, reactive molecular dynamics simulation, amorphous silica
Procedia PDF Downloads 418
509 Characterization of Transmembrane Proteins with Five Alpha-Helical Regions
Authors: Misty Attwood, Helgi Schioth
Abstract:
Transmembrane proteins are important components of many essential cell processes such as signal transduction, cell-cell signalling, transport of solutes, structural adhesion activities, and protein trafficking. Due to their involvement in these diverse critical activities, transmembrane proteins are implicated in different disease pathways and hence are the focus of intense interest in understanding their functional activities, their roles in disease pathogenesis, and their potential as pharmaceutical targets. Further, as the structure and function of proteins are correlated, investigating a group of proteins with the same tertiary architecture, i.e., the same number of transmembrane regions, may give insight into their functional roles and potential as therapeutic targets. In this in silico bioinformatics analysis, we identify and comprehensively characterize the previously unstudied group of proteins with five transmembrane-spanning regions (5TM). We classify nearly 60 5TM proteins, of which 31 are members of ten families that contain two or more members, all of which are predicted to contain the 5TM architecture. Furthermore, nine singlet proteins that contain the 5TM architecture without detected human paralogues were also identified, indicating the evolution of single unique proteins with the 5TM structure. Interestingly, more than half of these proteins function in localization activities through movement or tethering of cell components, and more than one-third are involved in transport activities, particularly in the mitochondria. Surprisingly, no receptor activity was identified within this group, in sharp contrast with other TM families. Three major 5TM families were identified: the Tweety family, whose members are pore-forming subunits of the swelling-dependent volume-regulated anion channel in astrocytes; the sideroflexin family, which acts as mitochondrial amino acid transporters; and the Yip1 domain family, engaged in vesicle budding and intra-Golgi transport. About 30% of the proteins have enhanced expression in the brain, liver, or testis. Importantly, 60% of these proteins are identified as cancer prognostic markers, being associated with clinical outcomes of various tumour types, which indicates that further investigation into the function and expression of these proteins is warranted. This study provides the first comprehensive analysis of proteins with 5TM regions and details their unique characteristics and potential applications in pharmaceutical development. Keywords: 5TM, cancer prognostic marker, drug targets, transmembrane protein
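The abstract does not state which topology predictor was used, so the snippet below is only a minimal, hedged illustration of the underlying idea: flagging candidate transmembrane helices with a Kyte-Doolittle hydropathy sliding window and counting them to screen for a 5TM-like architecture. The window size, threshold, and toy sequence are assumptions chosen for demonstration, not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): flag candidate transmembrane
# helices with a Kyte-Doolittle hydropathy sliding window and count them.

KD = {  # Kyte-Doolittle hydropathy values
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
    'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
    'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
    'Y': -1.3, 'V': 4.2,
}

def candidate_tm_segments(seq, window=19, threshold=1.6):
    """Return (start, end) index pairs of regions whose mean hydropathy over
    a sliding window exceeds the threshold, merging overlapping windows."""
    segments = []
    for i in range(len(seq) - window + 1):
        score = sum(KD[aa] for aa in seq[i:i + window]) / window
        if score >= threshold:
            if segments and i <= segments[-1][1]:
                segments[-1] = (segments[-1][0], i + window)   # extend/merge
            else:
                segments.append((i, i + window))
    return segments

# Hypothetical toy sequence: five hydrophobic stretches joined by polar linkers
tm_stretch, linker = "LLIVALAVFALAIVLAFAV", "GSGSDKRE" * 3
toy = "MKT" + (tm_stretch + linker) * 5
segments = candidate_tm_segments(toy)
print(f"{len(segments)} candidate TM segment(s): {segments}")
```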
Procedia PDF Downloads 109
508 Value Chain with the Participation of Urban Agriculture Development by Social Enterprises
Authors: Kuo-Wei Hsu, Wei-Chin Lo
Abstract:
In recent years, urban agriculture has been spreading widely all over the world. The development of urban agriculture is an evolutionary process accompanying rapid urbanization, as well as an agricultural phenomenon closely related to the development of the economy, society and culture in urban areas. It provides densely populated areas with multi-functional uses of land, affecting the strategic development of both large and small towns in the area. In addition, the participation of social enterprises sustains industrial competitiveness and generates gains in the face of rapid transformation of industrial structures and new patterns of urban lifestyles. Social enterprises create better living conditions and protect the environment through innovative business beliefs, which open new paths for the development of urban agriculture. Moreover, by building up the value chain, these social enterprises are capable of creating value for urban agriculture. Most current research on social enterprises explores the relationship between corporate responsibility and the roles, operational modes, performance, and organizational patterns of such enterprises. Only a few studies discuss the function of social entrepreneurship in the development of urban agriculture, and none has explored how social enterprises create value for urban agriculture development, or how they operate to increase competitive advantage, achieve industrial innovation, raise corporate value, and provide value-creating services. Therefore, this research reviews the current business patterns and operational conditions of social enterprises endowed with social responsibilities and discusses the current development process of urban agriculture. It adopts a value chain perspective to discuss the key factors of value creation in urban agriculture development undertaken by social enterprises. After organization and integration, the research develops the prospects for value creation in urban agriculture by social enterprises and builds the value chain for urban agriculture. In conclusion, this research explored the relationship between the value chain and value creation, encompassing customer, enterprise, social and economic value in the development of urban agriculture with the participation of social enterprises, and thereby established the connection between the value chain and value creation in this context. The research found that social enterprises help to strengthen the connection between enterprise value and social value, shape corporate image through social responsibility, create brand value, and thereby contribute to increased economic value. Keywords: urban agriculture development, value chain, social enterprise, urban systems
Procedia PDF Downloads 481
507 The Ethical Influence in the Political Configuration of Society: An Articulation between the Phänomenologie des Geistes and the Grundlinien der Philosophie des Rechts
Authors: Joao Gouveia
Abstract:
This is a study of Hegelian political and moral philosophy. Our aim is to understand the relevance that Hegel attributes to ethics in the concrete political configuration of society. Our analysis is not limited to Hegel's best-known political work, the Grundlinien der Philosophie des Rechts; we also analyze the Phänomenologie des Geistes and establish a comparison between the two. In the Moralität of the Grundlinien der Philosophie des Rechts, consciousness acquires the disposition that allows it to see any determination as its own (the certainty about itself, or Gewissen). This certainty is the essential disposition that makes itself felt throughout all of Sittlichkeit: the dispositions of family member and citizen (Bürger) are only configurations of it. Although consciousness is alienated in these dispositions, it does not lose the certainty about itself that it reached in the Moralität. As our major finding, we point out that it is moral learning that allows consciousness to resist the temptation of focusing so intensely on a specific content that it excludes all others (a temptation stimulated by the very intensity with which each content presents itself to consciousness). As the world of Bildung of the Phänomenologie des Geistes is not preceded by a sphere of Moralität, consciousness is thrown into a frenzy of destruction of all the powers of objectivity, and it ends up having to withdraw from concrete contents and focus on an abstract whole, where it finds no opposing determinacies. The evidence supporting our thesis is that the transition from abstraction into particularity, which we see in the Grundlinien der Philosophie des Rechts, allows the preservation of abstraction (it is not lost as we enter particularity). The transition we find in the Phänomenologie des Geistes, on the other hand, is a transition from particularity to abstraction, which leads every particularity to be eliminated in the war with the others. While in the Phänomenologie des Geistes the state may only be seen as a moment or facet of the object (it is only Staatsmacht), in the Grundlinien der Philosophie des Rechts it is seen as a whole that contains various moments within itself (Staat). Therefore, the element of the Phänomenologie des Geistes that comes closest to the State of the Grundlinien der Philosophie des Rechts is language (or the language of perversion), something that cannot be defined as an individuality. In this way, we want to show that, between the Phänomenologie des Geistes and the Grundlinien der Philosophie des Rechts, there is truly no remarkable evolution to report in Hegel's ethical thought. What the difference in the structure of the two works shows is a specific thesis concerning the influence of ethics on the configuration of society, and this thesis has implications at various levels, including in the philosophy of history. Keywords: Grundlinien der Philosophie des Rechts, Hegelian ethics, Hegelian politics, Phänomenologie des Geistes
Procedia PDF Downloads 97
506 Optimized Parameters for Simultaneous Detection of Cd²⁺, Pb²⁺ and Co²⁺ Ions in Water Using Square Wave Voltammetry on the Unmodified Glassy Carbon Electrode
Authors: K. Sruthi, Sai Snehitha Yadavalli, Swathi Gosh Acharyya
Abstract:
Water is the most crucial element for sustaining life on earth, and increasing water pollution directly or indirectly harms human life. Most heavy metal ions are harmful in their cationic form. These ions are released by activities such as battery disposal, industrial waste, automobile emissions, and soil contamination. Ions such as Pb²⁺, Co²⁺ and Cd²⁺ are carcinogenic and show many harmful effects when consumed above the limits proposed by the WHO. The simultaneous detection of these highly toxic heavy metal ions (Pb, Co, Cd) is reported in this study. Many analytical methods exist for their quantification, but electrochemical techniques are given high priority because of their sensitivity and ability to detect lower concentrations. Among the electrochemical methods, square wave voltammetry was preferred because it minimizes background (capacitive) currents, which act as interference. Square wave voltammetry was performed on a glassy carbon electrode (GCE) for the quantitative detection of the ions. A three-electrode system was used, consisting of a glassy carbon electrode (3 mm diameter) as the working electrode, an Ag/AgCl electrode as the reference electrode, and a platinum wire as the counter electrode. Detection was optimized by varying the experimental parameters, namely pH, scan rate, and temperature, and square wave voltammetry was then performed under the optimized conditions for simultaneous detection. Scan rates were varied from 5 mV/s to 100 mV/s, and it was found that at 25 mV/s all three ions were detected simultaneously, with well-defined peaks at their respective stripping potentials. The pH was varied from 3 to 8, and pH 5 was taken as the optimum for all three ions: the response decreased at lower pH because of hydrogen gas evolution, and decreased again above pH 5 because of hydroxide formation on the surface of the working electrode (GCE). The temperature was varied from 25 °C to 45 °C, with 35 °C taken as the optimum for the three ions. Deposition and stripping potentials of +1.5 V and -1.5 V were applied, with a rest time of 150 seconds. The three ions were detected at stripping potentials of -0.84 V for Cd²⁺, -0.54 V for Pb²⁺, and -0.44 V for Co²⁺. The detection parameters were thus optimized on a glassy carbon electrode for the simultaneous detection of the ions at low concentrations by square wave voltammetry. Keywords: cadmium, cobalt, lead, glassy carbon electrode, square wave anodic stripping voltammetry
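For readers unfamiliar with the technique, the short sketch below illustrates how a square wave voltammetry excitation waveform is built from a staircase ramp with a superimposed square-wave pulse, and how the step potential and frequency set the effective scan rate. The parameter values are illustrative assumptions chosen only so that the effective scan rate works out to 25 mV/s; they are not taken from the study.

```python
import numpy as np

# Illustrative SWV parameters (assumptions, not the study's settings)
E_start, E_end = -1.2, 0.0   # V, scan window
step = 0.005                 # V per staircase step
amplitude = 0.025            # V, square-wave half-amplitude
frequency = 5.0              # Hz, one staircase step per square-wave period

n_steps = int(round((E_end - E_start) / step))
t_half = 1.0 / (2.0 * frequency)          # duration of each half-cycle, s

times, potentials = [], []
for k in range(n_steps):
    base = E_start + k * step
    # forward (positive) pulse then reverse (negative) pulse on each step
    times += [k / frequency, k / frequency + t_half]
    potentials += [base + amplitude, base - amplitude]

potentials = np.array(potentials)
print(f"{n_steps} staircase steps, {len(potentials)} pulses")
print(f"Effective scan rate: {step * frequency * 1000:.0f} mV/s")
# In SWV the current is sampled at the end of each forward and reverse pulse;
# the net current (i_forward - i_reverse) gives the peak-shaped voltammogram,
# with each analyte stripping at its characteristic potential.
```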
Procedia PDF Downloads 117
505 Developing Communicative Skills in Foreign Languages by Video Tasks
Authors: Ekaterina G. Lipatova
Abstract:
The developing potential of video tasks in teaching foreign languages lies in the opportunity to improve four aspects of the speech production process: listening, reading, speaking and writing. A video presents a sequence of actions realized in logically connected pictures together with a verbalized speech flow, which simplifies and stimulates the process of perception. In this way, students' listening skills are developed effectively, along with intellectual abilities such as synthesizing, analyzing and generalizing information. In terms of teaching capacity, a video task is, in our opinion, more stimulating than a traditional listening exercise, since it draws the student into the plot and emotional background of the communicative situation and prompts them to react to the gist in cognitive and communicative ways. To be an effective teaching method, the video task should be structured according to the psycholinguistic characteristics of the speech production process; in other words, it should include three phases: before-watching, while-watching and after-watching. The system of tasks for each phase might involve responses to the video content in the form of gap-filling, multiple-choice and true-or-false tasks (reading skills), as well as opinion-expressing exercises and projects (writing and speaking skills). In the before-watching phase, we invite students to attune their perception to the topic and problem of the chosen video with tasks such as "What do you know about this problem?", "Is it new for you?", "Have you ever faced the situation of…?". We then proceed with the lexical and grammatical analysis of the language units that form the body of the speech sample, to ease perception and develop the students' lexicon. The goal of the while-watching phase is to build students' awareness of the problem presented in the video and to probe their attitude towards what they have seen, by identifying mistakes in statements about the video content or by producing a summary that justifies their understanding. Finally, we move on to developing their speech skills within the communicative situation they have observed and learnt, by encouraging them to look for similar ideas in their own experience and present them orally or in writing, or to express their own opinion on the problem. It should be emphasized that a video task should contain a topical, valid and interesting event related to the student's future profession, since this helps to activate students' cognitive, emotional, verbal and ethical capacities. Also, logically structured video tasks are easily integrated into an e-learning system and give students the opportunity to work with the foreign language on their own. Keywords: communicative situation, perception mechanism, speech production process, speech skills
Procedia PDF Downloads 245
504 Evolution of Nettlespurge Oil Mud for Drilling Mud System: A Comparative Study of Diesel Oil and Nettlespurge Oil as Oil-Based Drilling Mud
Authors: Harsh Agarwal, Pratikkumar Patel, Maharshi Pathak
Abstract:
Recently, low crude oil prices and increasingly strict environmental regulations have limited the use of diesel-based muds, as these muds are relatively costly and toxic; as a result, disposal of cuttings into the ecosystem is a major issue faced by the drilling industry. To overcome these issues, an attempt has been made to develop an oil-in-water emulsion mud system using nettlespurge oil. Nettlespurge oil is readily available, and its cost is around ₹30/litre, about half the price of diesel in India. Oil-based mud (OBM) was formulated with nettlespurge oil extracted from nettlespurge seeds using the Soxhlet extraction method. The properties of the formulated nettlespurge oil mud were compared with those of diesel oil mud, namely the rheological properties (yield point and gel strength), mud density, and filtration loss properties (fluid loss and filter cake). The mud density measurements showed that the nettlespurge OBM was slightly denser than the diesel OBM, with values of 9.175 lb/gal and 8.5 lb/gal, respectively, at a barite content of 70 g; it also has a higher lubricating property. The filtration loss test showed a fluid loss volume of 11 ml for the nettlespurge mud, compared with 15 ml for the diesel oil mud. The nettlespurge oil mud formed a filter cake 2.2 mm thick that was thin and squashy, while the diesel oil mud formed a filter cake 2.7 mm thick that was tenacious, rubbery and resilient. The nettlespurge oil mud thus showed much lower fluid loss and a thinner filter cake than the diesel-based mud, indicating low formation damage and good emulsion stability. The nettlespurge oil-in-water mud system also had a lower coefficient of friction than the diesel oil-based mud system, and all rheological properties showed better results relative to the diesel-based oil mud. Therefore, based on the factors above and the experimental data, we conclude that nettlespurge oil-based mud is economically and ecologically far more feasible for the drilling industry than conventional diesel-based oil mud. Keywords: economical feasible, ecological feasible, emulsion stability, nettle spurge oil, rheological properties, soxhlet extraction method
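As context for the rheological comparison above, the sketch below shows how plastic viscosity, yield point and apparent viscosity are conventionally derived from 600 rpm and 300 rpm Fann viscometer dial readings using the standard API field formulas. The dial readings here are hypothetical placeholders, not measurements from the study.

```python
# Hedged illustration: standard API field formulas for mud rheology from
# Fann viscometer dial readings (values below are hypothetical).

def mud_rheology(theta_600: float, theta_300: float) -> dict:
    """theta_600 / theta_300: dial readings at 600 and 300 rpm."""
    pv = theta_600 - theta_300   # plastic viscosity, cP
    yp = theta_300 - pv          # yield point, lb/100 ft^2
    av = theta_600 / 2.0         # apparent viscosity, cP
    return {"PV_cP": pv, "YP_lb_per_100ft2": yp, "AV_cP": av}

# Hypothetical dial readings for the two muds, for illustration only
for name, (t600, t300) in {"nettlespurge OBM": (46, 30),
                           "diesel OBM": (52, 33)}.items():
    print(name, mud_rheology(t600, t300))
```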
Procedia PDF Downloads 203
503 Medicinal Plants: An Antiviral Depository with Complex Mode of Action
Authors: Daniel Todorov, Anton Hinkov, Petya Angelova, Kalina Shishkova, Venelin Tsvetkov, Stoyan Shishkov
Abstract:
Human herpes viruses (HHV) are ubiquitous pathogens with a pandemic spread across the globe. HHV type 1 is the main causative agent of cold sores and fever blisters around the mouth and on the face, whereas HHV type 2 is generally responsible for genital herpes outbreaks. Treatment of both viruses is more or less successful with antivirals from the nucleoside analogue group, but their wide application increasingly leads to the emergence of resistant mutants. In the past, medicinal plants have been used to treat a number of infectious and non-infectious diseases. Their diversity and their ability to produce a vast variety of secondary metabolites in response to environmental conditions give them the potential to help in the fight against viral infections. Their variable chemical characteristics and complex composition are an advantage in the treatment of herpes, since they significantly complicate the emergence of resistant mutants. The screening process is difficult due to the lack of standardization, which is why it is especially important to follow the mechanism of antiviral action of the plant extracts: on the one hand, the compounds of an extract may interact with one another, resulting in enhanced antiviral effects; on the other, the most appropriate environmental conditions can be chosen to maximize the amount of active secondary metabolites. During our study, we screened a variety of plant extracts for activity against both the viral replication cycle and the extracellular virion, following the logical sequence of the experimental settings: determining the cytotoxicity of the extracts, evaluating the overall effect on viral replication, and evaluating the effect on the extracellular virion. Where positive results were obtained, we further studied the effect of the extracts on the individual stages of the viral replication cycle: viral adsorption, penetration, and replication depending on the time of addition. Our results indicate that some of the extracts from Lamium album have several targets, with the first stages of the viral life cycle most affected. Several of the active antiviral agents showed an effect on the extracellular virion and on the adsorption and penetration processes. Our research over the last decade has identified several antiviral plants with curative potential, some of which belong to the Lamiaceae family. The rich set of active ingredients of the plants in this family makes them a good source of antiviral preparations. Keywords: human herpes virus, antiviral activity, Lamium album, Nepeta nuda
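As a hedged illustration of how such screening results are commonly quantified (the numbers below are hypothetical, not data from this study), the snippet estimates the 50% cytotoxic concentration (CC50) and 50% inhibitory concentration (IC50) of an extract by interpolation on a dose-response series and reports the selectivity index SI = CC50/IC50.

```python
import numpy as np

def interpolate_50(concs, responses):
    """Linear interpolation (on log10 concentration) of the concentration
    giving a 50% response; assumes responses increase with concentration."""
    logc = np.log10(concs)
    return 10 ** np.interp(50.0, responses, logc)

# Hypothetical dose-response data for one extract (ug/mL)
concs = np.array([3.1, 6.25, 12.5, 25.0, 50.0, 100.0, 200.0])
cytotoxicity = np.array([2, 5, 9, 18, 34, 55, 81])     # % dead cells
inhibition = np.array([8, 21, 44, 63, 82, 93, 97])     # % reduction in viral titre

cc50 = interpolate_50(concs, cytotoxicity)
ic50 = interpolate_50(concs, inhibition)
print(f"CC50 = {cc50:.1f} ug/mL, IC50 = {ic50:.1f} ug/mL, SI = {cc50 / ic50:.1f}")
```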
Procedia PDF Downloads 154
502 In Search of Commonalities in the Determinants of Child Sex Ratios in India and the People's Republic of China
Authors: Suddhasil Siddhanta, Debasish Nandy
Abstract:
The child sex ratio pattern in Asian populations is highly masculine, mainly due to birth masculinity and gender bias in child mortality. The vast and growing literature on the female deficit in the world population points to the diffusion of this child sex ratio pattern across many Asian as well as neighboring European countries. However, little attention has been given to the common factors that explain the child sex ratio pattern across different demographic settings. Such scholarship is extremely important, as the level of gender inequity differs from one country setting to another. Our paper tries to explain the major structural commonalities in the child masculinity pattern in two demographic billionaires, India and China. The analysis reveals that, apart from the geographical diffusion of sex selection technology, patrilocal social structure, proxied by households with more than one generation in China and by the proportion of the population aged 65 years and above in India, can explain a significant share of the variation in missing girl children in the two countries. Even after controlling for individual capacity-building factors such as educational attainment or workforce participation, the measure of social stratification emerges as the major determinant of child sex ratio variation. Other socioeconomic factors that perform well are female agency-building factors, such as changing marriage customs, proxied by the divorce and remarriage ratio for China and by the percentage of women marrying at or after the age of 20 in India, and female workforce participation. The proportion of minorities in the socio-religious composition of the population and gender bias in scholastic attainment are also found to be significant in modeling child sex ratio variations in both countries. All these significant common factors associated with the child sex ratio point toward one single most important factor: the historical evolution of patriarchy and its contemporary perpetuation in both countries. It seems that prohibiting sex selection might not be sufficient to combat the peculiar skewness toward excessive maleness in the child populations of these countries; demand-side policies are therefore of utmost importance to root out the gender bias in child sex ratios. Keywords: child sex ratios, gender bias, structural factors, prosperity, patrilocality
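A minimal sketch of the kind of cross-sectional regression implied by the "controlling for" language above is given below. All variable names, coefficients and data are synthetic assumptions for illustration; they are not the authors' dataset or model specification.

```python
# Synthetic example: child sex ratio regressed on a patrilocality proxy while
# controlling for capacity-building factors (hypothetical data throughout).
import numpy as np

rng = np.random.default_rng(42)
n = 200  # hypothetical districts/counties

patrilocality = rng.uniform(0.1, 0.6, n)     # share of multi-generation households
female_education = rng.uniform(0.3, 0.9, n)  # female attainment proxy
female_workforce = rng.uniform(0.1, 0.7, n)  # female workforce participation
noise = rng.normal(0, 10, n)

# Synthetic outcome: girls per 1000 boys, depressed by patrilocality
child_sex_ratio = (980 - 120 * patrilocality + 15 * female_education
                   + 10 * female_workforce + noise)

X = np.column_stack([np.ones(n), patrilocality, female_education, female_workforce])
coefs, *_ = np.linalg.lstsq(X, child_sex_ratio, rcond=None)
for name, b in zip(["intercept", "patrilocality", "female_education",
                    "female_workforce"], coefs):
    print(f"{name:>17s}: {b:8.2f}")
```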
Procedia PDF Downloads 157
501 Hydro-Meteorological Vulnerability and Planning in Urban Area: The Case of Yaoundé City in Cameroon
Authors: Ouabo Emmanuel Romaric, Amougou Armathe
Abstract:
Background and aim: The study of the impacts of floods and landslides at a small scale, specifically in the urban areas of developing countries, is undertaken to provide tools to actors for better management of risks in such areas, which are now being affected by climate change. The main objective of this study is to assess the hydrometeorological vulnerabilities associated with flooding and urban landslides and to propose adaptation measures. Methods: Climatic data were analyzed by calculating climate change indices over the period 1960-2012. Field data were analyzed using SPSS 18 software to determine the causes, the level of risk and its consequences for the study area. Cartographic analysis and GIS were used to refine the work spatially, and spatial and terrain analyses were then carried out to determine the morphology of the terrain in relation to floods and landslides and their diffusion across the area. Results: The interannual variation in precipitation highlighted 21 surplus years, 24 deficit years and 7 normal years. The Barakat method brings out an evolution of precipitation proceeding in jerks and jumps. Floods and landslides are correlated with high precipitation during surplus and normal years. Field data analyses show that 78% of the population is aware of the risks and 74% is exposed, but adaptation capacity is very low (51%). Floods are the main risk. The soils are classed as ferralitic (80%), hydromorphic (15%) and raw mineral (5%). Slope variations (5% to 15%) across small hills and deep valleys, combined with unplanned construction, favor floods and landslides during heavy precipitation. Mismanagement of waste blocks the free flow of rivers and accentuates floods. Conclusion: The vulnerability of the population of Yaoundé VI to hydrometeorological risks results from the combination of climate-change-driven variations in parameters such as precipitation and temperature with poor planning of construction in urban areas. Because of the lack of drainage channels, soil saturation, the increase in heavy precipitation and the mismanagement of waste, the result is floods and landslides that cause extensive damage to property and people. Keywords: climate change, floods, hydrometeorological, vulnerability
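To make the surplus/deficit/normal classification concrete, the sketch below applies one common approach, standardized annual precipitation anomalies with a symmetric threshold, to synthetic rainfall totals for 1960-2012. Both the data and the ±0.25 threshold are assumptions for illustration; the abstract does not specify the exact rule used in the study.

```python
# Hedged sketch: classify annual rainfall totals into surplus, deficit and
# normal years using standardized anomalies (synthetic data, assumed threshold).
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1960, 2013)
rainfall_mm = rng.normal(loc=1600, scale=250, size=years.size)  # hypothetical totals

z = (rainfall_mm - rainfall_mm.mean()) / rainfall_mm.std()

surplus = years[z > 0.25]
deficit = years[z < -0.25]
normal = years[np.abs(z) <= 0.25]

print(f"surplus: {surplus.size}, deficit: {deficit.size}, normal: {normal.size}")
```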
Procedia PDF Downloads 466