Search results for: mutant sets
1140 Orbit Determination from Two Position Vectors Using Finite Difference Method
Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.
Abstract:
A novel approach is developed to determine the orbit of satellites/space objects. Orbit determination is treated as a boundary value problem and solved using the finite difference method (FDM). Only the positions of the satellites/space objects at two end times are known and taken as boundary conditions. The finite difference technique is used to calculate the orbit between the end times. In this approach, the governing equation is the satellite's equation of motion with a perturbed acceleration. Using the finite difference method, the governing equations and boundary conditions are discretized. The resulting system of algebraic equations is solved using the Tri-Diagonal Matrix Algorithm (TDMA) until convergence is achieved. This methodology has been tested and evaluated using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for DOY 125, 2023. Towards this, twelve two-hour sets have been considered, and only the positions at the end times of each of the twelve sets are taken as boundary conditions. The algorithm is applied to all GPS satellites, and the FDM results are compared with the NGA precise orbits. The maximum RSS error is 0.48 [m] in position and 0.43 [mm/sec] in velocity. The algorithm is also applied to the IRNSS satellites for DOY 220, 2023; the maximum RSS error is 0.49 [m] in position and 0.28 [mm/sec] in velocity. Next, a simulation has been done for a highly elliptical orbit for DOY 63, 2023, over a duration of 6 hours. The RSS of the difference is 0.92 [m] in position and 1.58 [mm/sec] in velocity for orbital speeds above 5 km/sec, whereas it is 0.13 [m] in position and 0.12 [mm/sec] in velocity for orbital speeds below 5 km/sec. Results show that the newly created method is reliable and accurate.
Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
Keywords: finite difference method, grid generation, NavIC system, orbit perturbation
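The FDM discretization described above yields a tridiagonal linear system at each iteration. As an illustrative sketch only (the variable names and test system are assumptions, not taken from the paper), the TDMA (Thomas algorithm) solver it mentions might look like this:

```python
def tdma(a, b, c, d):
    """Solve a tridiagonal system by the Thomas algorithm.
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. Returns the solution list."""
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For the system [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8], this returns [1, 2, 3].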
Procedia PDF Downloads 84
1139 Folding Pathway and Thermodynamic Stability of Monomeric GroEL
Authors: Sarita Puri, Tapan K. Chaudhuri
Abstract:
Chaperonin GroEL is a tetradecameric Escherichia coli protein with identical subunits of 57 kDa. Elucidating thermodynamic stability parameters for native GroEL is not feasible because it undergoes irreversible unfolding owing to its large size (800 kDa) and multimeric nature. Nevertheless, it is important to determine the thermodynamic stability parameters for the highly stable GroEL protein, as it helps fold and hold many substrate proteins during cellular stresses. Properly folded monomers work as building blocks for the formation of native tetradecameric GroEL. The spontaneous refolding behavior of monomeric GroEL makes it suitable for protein-denaturant interaction and thermodynamic stability studies. Urea-mediated unfolding is a three-state process, meaning one intermediate state forms along with the native and unfolded states. Heat-mediated denaturation is a two-state process. The unfolding is reversible, as observed by the spontaneous refolding of denatured protein in both the urea- and heat-mediated refolding processes. Analysis of the folding/unfolding data provides a measure of various thermodynamic stability parameters for monomeric GroEL. The proposed mechanism of unfolding of monomeric GroEL is a three-state process involving the formation of one stable intermediate with a folded apical domain and unfolded equatorial and intermediate domains. Research in progress aims to demonstrate the importance of specific residues in the stability and oligomerization of the GroEL protein. Several mutant versions of GroEL are under investigation to resolve this issue.
Keywords: equilibrium unfolding, monomeric GroEL, spontaneous refolding, thermodynamic stability
Procedia PDF Downloads 282
1138 Fuzzy Control and Pertinence Functions
Authors: Luiz F. J. Maia
Abstract:
This paper presents an approach to fuzzy control, with the use of new pertinence functions, applied to the case of an inverted pendulum. Appropriate definitions of pertinence functions for fuzzy sets make it possible to implement the controller with only one control rule, resulting in a smooth control surface. The fuzzy control system can be implemented with analog devices, affording true real-time performance.
Keywords: control surface, fuzzy control, inverted pendulum, pertinence functions
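A pertinence (membership) function maps a crisp input to a degree of set membership in [0, 1]. The paper's new functions are not reproduced here, so as an assumed illustration only, a common triangular form can be sketched as:

```python
def triangular(x, a, b, c):
    """Triangular pertinence function: 0 outside [a, c], rising
    linearly from a to the peak at b, then falling linearly to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

For example, triangular(2.5, 0, 5, 10) gives membership 0.5 on the rising edge.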
Procedia PDF Downloads 449
1137 Structural Properties, Natural Bond Orbital, Density Functional Theory (DFT) Calculations, and Energies for Fluorous Compounds: C13H12F7ClN2O
Authors: Shahriar Ghammamy, Masomeh Shahsavary
Abstract:
In this paper, the optimized geometries and frequencies of the stationary point and the minimum energy paths of C13H12F7ClN2O are calculated using DFT (B3LYP) methods with the LANL2DZ basis set. The B3LYP/LANL2DZ calculations yielded selected bond length and bond angle values for C13H12F7ClN2O.
Keywords: C13H12F7ClN2O, natural bond orbital, fluorous compounds, functional calculations
Procedia PDF Downloads 336
1136 The Effects of Varying Nutrient Conditions on Hydrogen Production in PGR5-Deficient C. reinhardtii Mutants
Authors: Samuel Mejorado
Abstract:
C. reinhardtii is one of the most promising organisms from which to obtain biological hydrogen. However, its production catalyst, [FeFe]-hydrogenase, is largely inhibited by the presence of oxygen. In recent years, researchers have identified a Proton Gradient Regulation 5 (PGR5) deficient mutant, which shows enhanced respiration and lower accumulation of oxygen within the system. In this research, we investigated the effects of varying nutrient conditions on the ability of PGR5 mutants to produce hydrogen. After growing PGR5 mutants in varying nutrient conditions under 55 W fluorescent lamps at 30℃ with constant stirring at 200 rpm, a common water displacement method was used to obtain a definitive volumetric reading of the hydrogen produced by these mutants over a period of 12 days. After the trials, statistical t-tests and ANOVAs were performed to better determine the effect of nutrient conditions on the mutants' hydrogen production. We report that sulfur deprivation most optimally enhanced hydrogen production in these mutants, with groups grown under these conditions demonstrating the highest production capacity over the entire 12-day period. Similarly, under nitrogen deprivation, a favorable shift towards carbon fixation and overall lipid/starch metabolism was observed. Overall, these results demonstrate that PGR5-deficient mutants are a promising source of biohydrogen when grown under sulfur deprivation. To date, the photochemical characteristics of [FeFe]-hydrogenase in these mutants have yet to be investigated under conditions of sulfur deprivation.
Keywords: biofuel, biohydrogen, [FeFe]-hydrogenase, algal biofuel
Procedia PDF Downloads 143
1135 Experimental and Theoretical Study on Flexural Behavior of Reinforced Cement Concrete (RCC) Beams Using Carbon Fiber Reinforced Polymer (CFRP) Laminates as a Retrofitting and Rehabilitation Method
Authors: Fils Olivier Kamanzi
Abstract:
In this research, CFRP materials were used to rehabilitate 9 beams and retrofit 9 beams, each of size (125x250x2300) mm, for M50 grade concrete with 20% of the volume of cement replaced by GGBS as a mineral admixture. A superplasticizer (Fosroc Conplast SP430) was used to reduce the water-cement ratio while maintaining good workability of the fresh concrete (slump 57 mm). The concrete mix ratio was 1:1.56:2.66 with a water-cement ratio of 0.31 (ACI code). Samples of 6 cubes of size (150x150x150) mm, 6 cylinders of size (150ФX300H) mm, and 6 prisms of size (100x100x500) mm were cast, cured, and tested at 7, 14, and 28 days by compressive, tensile, and flexure tests; the mix design reached a compressive strength of 59.84 N/mm2. 21 beams were cast and cured for up to 28 days; 3 beams were tested under two-point loading as control beams. 9 beams were distressed in flexure under two-point loading up to the yielding point, taking 90% of the ultimate load. Three sets, each of three distressed beams, were rehabilitated using one, two, and three layers of CFRP sheets, respectively, and then retested up to failure. Another three sets were freshly retrofitted, also using one, two, and three layers of CFRP sheets, respectively, and tested by the two-point loading method. The aim of this study is to determine the flexural strength and behavior of beams repaired and retrofitted with CFRP sheets, with a view to gaining strength while considering economic aspects. The results show that the rehabilitated beams increased their strength by 47%, 78%, and 89%, according to the number of CFRP sheet layers, and the retrofitted beams by 41%, 51%, and 68%, respectively. The conclusion is that three layers of CFRP sheets are the most effective in the bonded-beam repair and retrofitting method.
Keywords: retrofitting, rehabilitation, CFRP, RCC beam, flexural strength and behavior, GGBS, epoxy resin
Procedia PDF Downloads 108
1134 Using Machine Learning to Classify Different Body Parts and Determine Healthiness
Authors: Zachary Pan
Abstract:
Our general mission is to solve the problem of classifying images into different body part types and deciding whether each of them is healthy. For now, however, we will determine healthiness for only one-sixth of the body parts, specifically the chest: we will detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they take CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach is to split the problem into two parts: first classify the image, then determine if it is healthy. To classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Using the test set, we can obtain a realistic accuracy that the models will have on images in the real world, since these testing images have never been seen by the models before. To increase this testing accuracy, we can also apply more complex algorithms to the models, like multiplicative weight update. For the second part of the problem, determining whether the body part is healthy, we can take another dataset consisting of healthy and non-healthy images of the specific body part and once again split it into test and training sets. We then train another neural network on those training set images and use the testing set to measure its accuracy. We do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification.
In classifying the images, the logistic regression model, the neural network, the neural network with multiplicative weight update, the neural network with the black box algorithm, and the convolutional neural network achieved 96.83 percent, 97.33 percent, 97.83 percent, 96.67 percent, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines whether the images are healthy is around 78.37 percent.
Keywords: body part, healthcare, machine learning, neural networks
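The train/test workflow described above — fit on a training split, then report accuracy on held-out images — can be sketched in miniature with a toy logistic-regression classifier. The data, split fraction, and hyperparameters below are illustrative assumptions, not the paper's:

```python
import math
import random

def split(data, labels, test_frac=0.25, seed=0):
    """Shuffle indices and split the dataset into train/test portions."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return ([data[i] for i in tr], [labels[i] for i in tr],
            [data[i] for i in te], [labels[i] for i in te])

def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit logistic regression by stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability
            err = p - yi                      # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Class 1 if the decision function is positive, else class 0."""
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
```

On a real image task, the feature vectors would be pixel intensities and the held-out accuracy would be the number reported, as in the abstract.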
Procedia PDF Downloads 103
1133 Preliminary Evaluation of Maximum Intensity Projection SPECT Imaging for Whole Body Tc-99m Hydroxymethylene Diphosphonate Bone Scanning
Authors: Yasuyuki Takahashi, Hirotaka Shimada, Kyoko Saito
Abstract:
Bone scintigraphy is widely used as a screening tool for bone metastases. However, the 180 to 240 minutes (min) waiting time after the intravenous (i.v.) injection of the tracer is both long and tiresome. To address this shortcoming, a bone scan with a shorter waiting time is needed. In this study, we applied Maximum Intensity Projection (MIP) and triple energy window (TEW) scatter correction to a whole body bone SPECT (Merged SPECT) and investigated shortening the waiting time. Methods: In a preliminary phantom study, hot gels of 99mTc-HMDP were inserted into sets of rods with diameters ranging from 4 to 19 mm. Each rod set covered a sector of a cylindrical phantom. The activity concentration of all rods was 2.5 times that of the background in the cylindrical body of the phantom. In the human study, SPECT images were obtained from chest to abdomen at 30 to 180 min after 99mTc-hydroxymethylene diphosphonate (HMDP) injection in healthy volunteers. For both studies, MIP images were reconstructed. Planar whole body images of the patients were also obtained; these were acquired at 200 min. The image quality of the SPECT and planar images was compared. Additionally, 36 patients with breast cancer were scanned in the same way, and the detectability of uptake regions (metastases) was compared visually. Results: In the phantom study, a 4 mm hot gel was difficult to depict on conventional SPECT, but MIP images could recognize it clearly. For both the healthy volunteers and the clinical patients, the accumulation of 99mTc-HMDP in the SPECT was good as early as 90 min. All findings of both image sets were in agreement. Conclusion: In phantoms, MIP images with TEW scatter correction could detect all rods down to a diameter of 4 mm. In patients, MIP reconstruction with TEW scatter correction could improve the detectability of hot lesions.
In addition, the time between injection and imaging could be shortened from that conventionally used for whole body scans.
Keywords: merged SPECT, MIP, TEW scatter correction, 99mTc-HMDP
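A maximum intensity projection simply keeps, for each output pixel, the largest voxel value encountered along the projection axis. A minimal sketch (plain Python lists standing in for a reconstructed SPECT volume; the toy data are an assumption) might be:

```python
def mip(volume):
    """Maximum Intensity Projection of a 3-D volume, given as a list
    of 2-D slices, along the slice axis: each output pixel is the
    maximum over all slices at that (row, col) position."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(s[r][c] for s in volume) for c in range(cols)]
            for r in range(rows)]
```

For a two-slice volume [[[1,2],[3,4]], [[5,0],[1,9]]], the projection is [[5,2],[3,9]].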
Procedia PDF Downloads 411
1132 Implementation of the K-Means Algorithm for Grouping Districts/Cities in Central Java Based on Macroeconomic Indicators
Authors: Nur Aziza Luxfiati
Abstract:
Clustering partitions a data set into sub-sets or groups such that elements within one group share properties with a high level of similarity, while the similarity between groups is low. The K-Means algorithm is one of the most widely used clustering algorithms in scientific and industrial applications because its basic idea is very simple. This research applies k-means clustering to the problem of national development imbalances between regions in Central Java Province based on macroeconomic indicators. The data sample is secondary data obtained from the Central Java Provincial Statistics Agency regarding macroeconomic indicators, which is part of the publication of the 2019 National Socio-Economic Survey (Susenas) data. Outliers were detected using the z-score, and the number of clusters (k) was determined using the elbow method. After the clustering process was carried out, validation was tested using the Between-Class Variation (BCV) and Within-Class Variation (WCV) methods. The results showed that outlier detection using z-score normalization found no outliers. In addition, the clustering test obtained a ratio value that was not high, namely 0.011%. There are two district/city clusters in Central Java Province with economic similarities based on the variables used: the first cluster, with a high economic level, consists of 13 districts/cities, and the second cluster, with a low economic level, consists of 22 districts/cities. Within the second, low-economy cluster, the authors grouped districts/cities by similarity to macroeconomic indicators: Gross Regional Domestic Product (20 districts), Poverty Depth Index (19 districts), Human Development Index (5 districts), and Open Unemployment Rate (10 districts).
Keywords: clustering, K-Means algorithm, macroeconomic indicators, inequality, national development
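The pipeline described — z-score normalization followed by k-means clustering — can be sketched in miniature. The 1-D toy data and fixed k below are illustrative assumptions (the study works with multivariate indicators and chooses k by the elbow method):

```python
def zscore(values):
    """Standardize values to zero mean and unit (population) variance."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - m) / sd for v in values]

def kmeans_1d(data, k, iters=100):
    """Plain 1-D k-means (k >= 2): evenly spaced initial centroids,
    then alternating assignment and centroid-update steps."""
    lo, hi = min(data), max(data)
    cents = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in data:  # assign each point to its nearest centroid
            j = min(range(k), key=lambda j: abs(x - cents[j]))
            groups[j].append(x)
        # move each centroid to the mean of its group (keep it if empty)
        cents = [sum(g) / len(g) if g else cents[j]
                 for j, g in enumerate(groups)]
    return cents, groups
```

On the data [1, 2, 3, 10, 11, 12] with k=2 this converges to centroids near 2 and 11, i.e. two well-separated groups, analogous to the high- and low-economy clusters of the study.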
Procedia PDF Downloads 158
1131 Inverse Scattering for a Second-Order Discrete System via Transmission Eigenvalues
Authors: Abdon Choque-Rivero
Abstract:
The Jacobi system with the Dirichlet boundary condition is considered on a half-line lattice when the coefficients are real valued. The inverse problem of recovery of the coefficients from various data sets containing the so-called transmission eigenvalues is analyzed. The Marchenko method is utilized to solve the corresponding inverse problem.
Keywords: inverse scattering, discrete system, transmission eigenvalues, Marchenko method
Procedia PDF Downloads 144
1130 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions
Authors: Valerii Dashuk
Abstract:
The usage of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with this technique allows checking the shift and the probability of that shift (i.e., portfolio risks) simultaneously. Another application is based on the normal distribution, which is fully defined by mean and variance and therefore can be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each 'point' of the domain of these distributions is calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values. The table of critical values was designed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strong sides of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous works. At the moment, the expansion to the two-dimensional case has been completed, allowing up to five parameters to be tested jointly.
Therefore, the derived technique is equivalent to classic tests in standard situations but gives more efficient alternatives in nonstandard problems and on large amounts of data.
Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function
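The kind of measure described — an absolute difference between two normal distributions, expressed through cumulative distribution functions and compared with simulated critical values — can be illustrated (as an assumption, not the paper's exact statistic) by the largest CDF gap between two normals over a grid:

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def max_cdf_gap(mu1, s1, mu2, s2, n=2001):
    """Largest absolute CDF difference between N(mu1, s1) and
    N(mu2, s2), scanned over an evenly spaced grid covering both."""
    lo = min(mu1 - 5 * s1, mu2 - 5 * s2)
    hi = max(mu1 + 5 * s1, mu2 + 5 * s2)
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return max(abs(norm_cdf(x, mu1, s1) - norm_cdf(x, mu2, s2)) for x in xs)
```

Identical parameter sets give a gap of essentially zero, while a mean shift of 2 standard deviations gives a gap of about 0.68; in the paper's setting such a statistic would then be compared against the simulated critical-value table.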
Procedia PDF Downloads 174
1129 Mitochondrial Apolipoprotein A-1 Binding Protein Promotes Repolarization of Inflammatory Macrophage by Repairing Mitochondrial Respiration
Authors: Hainan Chen, Jina Qing, Xiao Zhu, Ling Gao, Ampadu O. Jackson, Min Zhang, Kai Yin
Abstract:
Objective: Editing macrophage activation to dampen inflammatory diseases by promoting the repolarization of inflammatory (M1) macrophages to anti-inflammatory (M2) macrophages is highly associated with mitochondrial respiration. Recent studies have suggested that mitochondrial apolipoprotein A-1 binding protein (APOA1BP) is essential for repairing the cellular metabolite NADHX to NADH, which is necessary for mitochondrial function. The exact role of APOA1BP in the repolarization of M1 to M2, however, is uncertain. Materials and methods: THP-1-derived macrophages were incubated with LPS (10 ng/ml) or/and IL-4 (100 U/ml) for 24 hours. Biochemical parameters of oxidative phosphorylation and M1/M2 markers were analyzed after overexpression of APOA1BP in cells. Results: Compared with control and IL-4-exposed M2 cells, APOA1BP was downregulated in M1 macrophages. APOA1BP restored the decline in mitochondrial function to improve the metabolic and phenotypic reprogramming of M1 to M2 macrophages. Blocking oxidative phosphorylation with oligomycin blunted the effects of APOA1BP on M1-to-M2 repolarization. Mechanistically, LPS triggered the hydration of NADH and increased its hydrate NADHX, which inhibits cellular NADH dehydrogenases, a key component of the electron transport chain for oxidative phosphorylation. APOA1BP decreased the level of NADHX by converting R-NADHX to biologically useful S-NADHX. A mutant of APOA1BP at aspartate 188, the binding site of NADHX, failed to repair oxidative phosphorylation, thereby preventing repolarization. Conclusions: Restoring mitochondrial function by increasing mitochondrial APOA1BP might be useful to improve the reprogramming of inflammatory macrophages into anti-inflammatory cells to control inflammatory diseases.
Keywords: inflammatory diseases, macrophage repolarization, mitochondrial respiration, apolipoprotein A-1 binding protein, NADHX, NADH
Procedia PDF Downloads 172
1128 Antibacterial Effect of Hydroalcoholic Extracts of Salvia officinalis and Mentha pulegium on Three Dental Caries Strains, Streptococcus mutans, Lactobacillus rhamnosus, and Actinomyces viscosus, in vitro
Authors: H. Nabahat, E. Amiri, F. AzaditalabDavoudabadi, N. Zaeri
Abstract:
Tooth decay is one of the most common forms of oral and dental illness in the world and incurs huge treatment costs; its prevention and control are therefore very important, especially in high-risk groups such as people with dry mouth. The use of traditional treatments, such as drugs extracted from medicinal plants, is of paramount importance to Iran and the international community. The present study investigated the antibacterial effect of extracts of Salvia officinalis and Mentha pulegium, among the most commonly used herbs, on oral and dental bacteria (Streptococcus mutans, Lactobacillus rhamnosus, and Actinomyces viscosus) in vitro. In this experimental study, hydroalcoholic extracts of the two herbs, Salvia and Mentha, were prepared by maceration, and their antibacterial effect was evaluated by broth macrodilution against Streptococcus mutans, Lactobacillus rhamnosus, and Actinomyces viscosus. The results were analyzed by the Mann-Whitney test (P < 0.05). The results showed that the minimum inhibitory concentrations (MIC) of the Salvia and Mentha extracts were 6.25 and 12.5 μg/ml, respectively, for Streptococcus mutans; 1.56 and 3.12 μg/ml for Lactobacillus rhamnosus; and 12.5 and 100 μg/ml for Actinomyces viscosus. Broth macrodilution thus showed that both the Salvia and Mentha extracts had an inhibitory effect on all three species of bacteria. This effect was significantly (P < 0.05) greater for Salvia than for Mentha within the concentration range of both extracts, and both had a bactericidal effect on all three bacteria.
Keywords: antibacterial effect, dental bacteria, herbal extracts, Salvia officinalis, Mentha pulegium
Procedia PDF Downloads 152
1127 Parallel Multisplitting Methods for Differential Systems
Authors: Malika El Kyal, Ahmed Machmoum
Abstract:
We prove the superlinear convergence of asynchronous multisplitting methods applied to differential equations. This study is based on the technique of nested sets, which permits specifying the kind of convergence in the asynchronous mode. The main characteristic of an asynchronous mode is that the local algorithm does not have to wait at predetermined points for messages to become available. We allow some processors to communicate more frequently than others, and we allow the communication delays to be substantial and unpredictable. Note that synchronous algorithms, in the computer science sense, are particular cases of our formulation of asynchronous ones.
Keywords: parallel methods, asynchronous mode, multisplitting, ODE
Procedia PDF Downloads 526
1126 CAP-Glycine Protein Governs Growth, Differentiation, and the Pathogenicity of Global Meningoencephalitis Fungi
Authors: Kyung-Tae Lee, Li Li Wang, Kwang-Woo Jung, Yong-Sun Bahn
Abstract:
Microtubules are involved in mechanical support and cytoplasmic organization, as well as in a number of cellular processes, by interacting with diverse microtubule-associated proteins (MAPs), such as plus-end tracking proteins, motor proteins, and tubulin-folding cofactors. A common feature of these proteins is the presence of a cytoskeleton-associated protein-glycine-rich (CAP-Gly) domain, which is evolutionarily conserved and generally considered to bind to α-tubulin to regulate the functions of microtubules. However, there has been a dearth of research on CAP-Gly proteins in fungal pathogens, including Cryptococcus neoformans, which causes fatal meningoencephalitis globally. In this study, we identified five CAP-Gly protein-encoding genes in C. neoformans. Among these, Cgp1, encoded by CNAG_06352, has a unique domain structure that has not been reported before in other eukaryotes. Supporting the role of Cgp1 in microtubule-related functions, we demonstrate that deletion or overexpression of CGP1 alters cellular susceptibility to thiabendazole, a microtubule destabilizer, and that Cgp1 co-localizes with cytoplasmic microtubules. Related to the cellular functions of microtubules, Cgp1 also governs the maintenance of membrane stability and genotoxic stress responses. Furthermore, we demonstrate that Cgp1 uniquely regulates sexual differentiation of C. neoformans, with distinct roles in the early and late stages of mating. Our domain analysis reveals that the CAP-Gly domain plays major roles in all the functions of Cgp1. Finally, the cgp1Δ mutant is attenuated in virulence. In conclusion, this novel CAP-Gly protein, Cgp1, has pleiotropic roles in regulating the growth, stress responses, differentiation, and pathogenicity of C. neoformans.
Keywords: human fungal pathogen, CAP-Glycine protein, microtubule, meningoencephalitis
Procedia PDF Downloads 315
1125 Marriage Domination and Divorce Domination in Graphs
Authors: Mark L. Caay, Rodolfo E. Maza
Abstract:
In this paper, the authors define two new variants of domination in graphs: marriage domination and divorce domination. A subset S ⊆ V(G) is said to be a marriage dominating set of G if for every e ∈ E(G), there exists a u ∈ S such that u is one of the end vertices of e. A marriage dominating set S ⊆ V(G) is said to be a divorce dominating set of G if G\S is a disconnected graph. In this study, the authors present conditions on graphs under which marriage domination and divorce domination take place and under which the two sets coincide. Furthermore, the authors give necessary and sufficient conditions for marriage domination to avoid divorce.
Keywords: domination, decomposition, marriage domination, divorce domination, marriage theorem
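The two definitions can be checked mechanically on small graphs. The sketch below is an illustrative assumption based only on the definitions as stated in the abstract (every edge must have an endpoint in S; for divorce domination, G\S must additionally be disconnected):

```python
def is_marriage_dominating(edges, vertices, S):
    """Every edge must have at least one end vertex in S."""
    return all(u in S or v in S for u, v in edges)

def is_divorce_dominating(edges, vertices, S):
    """S must be marriage dominating and G \\ S must be disconnected."""
    if not is_marriage_dominating(edges, vertices, S):
        return False
    rest = [v for v in vertices if v not in S]
    if len(rest) <= 1:
        return False  # fewer than two vertices cannot be disconnected
    # build the induced subgraph on `rest` and run a DFS from one vertex
    adj = {v: set() for v in rest}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = {rest[0]}, [rest[0]]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) < len(rest)  # disconnected iff DFS misses a vertex
```

On the path 1-2-3-4-5, S = {2, 4} is both marriage and divorce dominating: every edge touches S, and removing S leaves the isolated vertices 1, 3, 5.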
Procedia PDF Downloads 17
1124 Algebras over an Integral Domain and Immediate Neighbors
Authors: Shai Sarussi
Abstract:
Let S be an integral domain with field of fractions F and let A be an F-algebra. An S-subalgebra R of A is called S-nice if R∩F = S and the localization of R with respect to S \{0} is A. Denoting by W the set of all S-nice subalgebras of A, and defining a notion of open sets on W, one can view W as a T0-Alexandroff space. A characterization of the property of immediate neighbors in an Alexandroff topological space is given, in terms of closed and open subsets of appropriate subspaces. Moreover, two special subspaces of W are introduced, and a way in which their closed and open subsets induce W is presented.
Keywords: integral domains, Alexandroff topology, immediate neighbors, valuation domains
Procedia PDF Downloads 177
1123 Two Brazilian Medeas: The Cases of Mata Teu Pai and Medeia Negra
Authors: Jaqueline Bohn Donada
Abstract:
The significance of Euripides' Medea for contemporary literature is noticeable. Even though the bulk of Classical Reception studies does not tend to look carefully and consistently at the literature produced outside the Anglophone world, Brazilian literature offers abundant material for such studies. Indeed, a certain Classical background can be observed in Brazilian literature at least since 1975, when Gota d'Água [The Final Straw, in English], a play that recreates the story of Medea and sets it in a favela in Rio de Janeiro, was staged. Also worthy of notice is Ivo Bender's Trilogia Perversa [Perverse Trilogy, in English], a series of three historical plays set in Southern Brazil and based on Aeschylus' Oresteia and on Euripides' Iphigenia in Aulis, published in the 1980s. Since then, a number of works directly inspired by the plays of Aeschylus, Sophocles, and Euripides have been published, not to mention several adaptations of Homer's two epic poems. This paper proposes a comparative analysis of two such works: Grace Passô's 2017 play Mata Teu Pai [Kill Your Father, in English] and Marcia Limma's 2019 play Medeia Negra [Black Medea, in English], from the perspective of Classical Reception Studies in intersection with feminist literary criticism. The paper looks at the endurance of Euripides' character in contemporary Brazilian literature, focusing on how the character seems to have acquired special relevance to the treatment of pressing issues of the twenty-first century. Whereas Grace Passô's play sets Medea at the center of a group of immigrant women, Marcia Limma has the character enact the dilemmas of incarcerated women in Brazil. The hypothesis this research aims to test is that both artists preserve the pathos of Euripides' original character while recreating his Medea in concrete circumstances of contemporary Brazilian social reality.
In the end, the research aims to affirm the significance of the Medea theme for contemporary Brazilian literature.
Keywords: Euripides, Medea, Grace Passô, Marcia Limma, Brazilian literature
Procedia PDF Downloads 131
1122 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure
Authors: Esra Zengin, Sinan Akkar
Abstract:
Reliable and accurate prediction of nonlinear structural response requires specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. The current research has mainly focused on selection and manipulation of real earthquake records that can be seen as the most critical step in the performance based seismic design and assessment of the structures. Utilizing amplitude scaled ground motions that matches with the target spectra is commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match target spectrum such as scenario-based spectrum derived from ground motion prediction equations, Uniform Hazard Spectrum (UHS), Conditional Mean Spectrum (CMS) or Conditional Spectrum (CS). Different sets of criteria exist among those developed methodologies to select and scale ground motions with the objective of obtaining robust estimation of the structural performance. This study presents ground motion selection and scaling procedure that considers the spectral variability at target demand with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match target median and corresponding variance within a specified period interval. The efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between scaled median and the target spectra where the dispersion of the earthquake shaking is preserved along the period interval. The impact of the spectral variability on nonlinear response distribution is investigated at the level of inelastic single degree of freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimations, results are compared with those obtained by CMS-based scaling methodology. 
The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
Keywords: ground motion selection, scaling, uncertainty, fragility curve
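The scaling stage described in the abstract (minimize the error between the scaled median spectrum and the target while preserving dispersion) can be sketched in a few lines. The sketch below is illustrative rather than the authors' algorithm: it computes a single least-squares amplitude scale factor in log space, whose closed form is the mean log ratio of target to median ordinates, and because the same factor is applied to every record in the set, the record-to-record log-dispersion is left unchanged. The spectral ordinates are hypothetical values.

```python
import math

def scale_factor(median_sa, target_sa):
    """Least-squares amplitude scale in log space: minimizes
    sum over periods of (ln(s * median) - ln(target))^2.
    Closed form: ln(s) = mean(ln(target) - ln(median))."""
    logs = [math.log(t) - math.log(m) for m, t in zip(median_sa, target_sa)]
    return math.exp(sum(logs) / len(logs))

# Hypothetical 5%-damped spectral accelerations (g) at a few periods
median_sa = [0.80, 0.60, 0.35, 0.20]   # median of the selected record set
target_sa = [0.95, 0.75, 0.40, 0.22]   # target spectrum ordinates (e.g. CMS)

s = scale_factor(median_sa, target_sa)
scaled = [s * m for m in median_sa]     # scaled median spectrum
```

Applying the common factor `s` to each individual record shifts the set's median onto the target but leaves the logarithmic standard deviation across records untouched, which is the dispersion-preserving property the procedure relies on.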
Procedia PDF Downloads 583
1121 Characterization of the Groundwater Aquifers at El Sadat City by Joint Inversion of VES and TEM Data
Authors: Usama Massoud, Abeer A. Kenawy, El-Said A. Ragab, Abbas M. Abbas, Heba M. El-Kosery
Abstract:
Vertical Electrical Sounding (VES) and Transient Electromagnetic (TEM) surveys have been applied to characterize the groundwater aquifers in the El Sadat industrial area. El Sadat city is one of the most important industrial cities in Egypt. It was constructed more than three decades ago, about 80 km northwest of Cairo along the Cairo–Alexandria desert road. Groundwater is the main source of the water supplies required for domestic, municipal, and industrial activities in this area due to the lack of surface water sources, so it is important to maintain this vital resource in order to sustain the city's development plans. In this study, VES and TEM data were measured at the same 24 stations along three profiles trending NE–SW with the elongation of the study area. The measuring points were arranged in a grid-like pattern with both inter-station spacing and line-to-line distance of about 2 km. After the necessary processing steps, the VES and TEM data sets were first inverted individually to multi-layer models, followed by a joint inversion of both data sets. The joint inversion succeeded in overcoming the model-equivalence problem encountered in the inversion of the individual data sets. The joint models were then used to construct a number of cross sections and contour maps showing the lateral and vertical distribution of the geoelectrical parameters in the subsurface. Interpretation of the results and correlation with the available geological and hydrogeological information revealed two aquifer systems in the area. The shallow Pleistocene aquifer consists of sand and gravel saturated with fresh water and exhibits a large thickness exceeding 200 m. The deep Pliocene aquifer is composed of clay and sand and shows low resistivity values.
The water-bearing layer of the Pleistocene aquifer and the upper surface of the Pliocene aquifer are continuous, and no structural features cut this continuity through the investigated area.
Keywords: El Sadat city, joint inversion, VES, TEM
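The model-equivalence problem that joint inversion overcomes has a simple linear analogue: each method alone constrains only certain combinations of the layer parameters, so many models fit its data equally well, but stacking both data sets yields a full-rank system with a unique solution. The toy example below is purely illustrative; the sensing operators and the three-parameter model are invented for the sketch and are not derived from actual VES/TEM physics.

```python
import numpy as np

# Toy 3-layer model vector m (e.g. log-resistivities); each hypothetical
# method senses a different linear combination of the layers.
m_true = np.array([2.0, 1.0, 3.0])

A_ves = np.array([[1.0, 1.0, 0.0]])          # "VES": sees layers 1+2 together
A_tem = np.array([[0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])          # "TEM": two other mixtures

d_ves = A_ves @ m_true
d_tem = A_tem @ m_true

# Individually, A_ves (1 equation, 3 unknowns) is hopelessly non-unique:
# infinitely many layer models reproduce d_ves (the equivalence problem).
# Jointly, the stacked system has full column rank, so one model fits both.
A_joint = np.vstack([A_ves, A_tem])
d_joint = np.concatenate([d_ves, d_tem])
m_joint, *_ = np.linalg.lstsq(A_joint, d_joint, rcond=None)
```

Real VES/TEM joint inversion replaces the linear operators with nonlinear layered-earth forward models and iterates, but the resolution-by-combination principle is the same.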
Procedia PDF Downloads 370
1120 How Manufacturing Firm Manages Information Security: Need Pull and Technology Push Perspective
Authors: Geuna Kim, Sanghyun Kim
Abstract:
This study investigates various factors that may influence the information security management (ISM) process, including the organization's internal needs and external pressure, and examines the role of regulatory pressure in ISM development and performance. The 105 sets of data collected in a survey were tested against the research model using structural equation modeling (SEM). The results indicate that need pull (NP) and technology push (TP) had positive effects on the ISM process, except for perceived benefits. Regulatory pressure had a positive effect on the relationship between ISM awareness and ISM development and performance.
Keywords: information security management, need pull, technology push, regulatory pressure
Procedia PDF Downloads 297
1119 DNA Double-Strand Break–Capturing Nuclear Envelope Tubules Drive DNA Repair
Authors: Mitra Shokrollahi, Mia Stanic, Anisha Hundal, Janet N. Y. Chan, Defne Urman, Chris A. Jordan, Anne Hakem, Roderic Espin, Jun Hao, Rehna Krishnan, Philipp G. Maass, Brendan C. Dickson, Manoor P. Hande, Miquel A. Pujana, Razqallah Hakem, Karim Mekhail
Abstract:
Current models suggest that DNA double-strand breaks (DSBs) can move to the nuclear periphery for repair. It is unclear to what extent human DSBs display such repositioning. Here we show that the human nuclear envelope localizes to DSBs in a manner depending on DNA damage response (DDR) kinases and cytoplasmic microtubules acetylated by α-tubulin acetyltransferase-1 (ATAT1). These factors collaborate with the linker of nucleoskeleton and cytoskeleton complex (LINC), nuclear pore complex (NPC) protein NUP153, the nuclear lamina and kinesins KIF5B and KIF13B to generate DSB-capturing nuclear envelope tubules (dsbNETs). dsbNETs are partly supported by nuclear actin filaments and the circadian factor PER1 and reversed by kinesin KIFC3. Although dsbNETs promote repair and survival, they are also co-opted during poly (ADP-ribose) polymerase (PARP) inhibition to restrain BRCA1-deficient breast cancer cells and are hyper-induced in cells expressing the aging-linked lamin A mutant progerin. In summary, our results advance understanding of nuclear structure-function relationships, uncover a nuclear-cytoplasmic DDR and identify dsbNETs as critical factors in genome organization and stability.
Keywords: DNA damage response, genome stability, nuclear envelope, cancer, age-related disorders
Procedia PDF Downloads 16
1118 Hamiltonian Paths and Cycles Passing through Prescribed Edges in the Balanced Hypercubes
Authors: Dongqin Cheng
Abstract:
The n-dimensional balanced hypercube BHn (n ≥ 1) is known to be a bipartite graph. Let P be a set of edges whose induced subgraph consists of pairwise vertex-disjoint paths, and let u and v be any two vertices from different partite sets of V(BHn). In this paper, we prove that if |P| ≤ 2n − 2 and the subgraph induced by P has neither u nor v as an internal vertex, nor both u and v as end-vertices, then BHn contains a Hamiltonian path joining u and v passing through P. As a corollary, if |P| ≤ 2n − 1, then BHn contains a Hamiltonian cycle passing through P.
Keywords: interconnection network, balanced hypercube, Hamiltonian cycle, prescribed edges
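The statement being proved, a Hamiltonian u-v path that is forced to use every edge of a prescribed set P, can be checked by brute force on small graphs. The sketch below is illustrative only: it uses the ordinary 3-cube Q3 (also bipartite) as a small stand-in, since constructing BHn itself would take more space; the prescribed edges and endpoints are invented for the example.

```python
from itertools import permutations

def has_ham_path_through(adj, u, v, prescribed):
    """Brute force: is there a Hamiltonian u-v path whose edge set
    contains every edge in `prescribed`?  adj: dict vertex -> set of
    neighbours; prescribed: set of frozenset edges.  Exponential in
    the vertex count -- for tiny illustrative graphs only."""
    inner = [w for w in adj if w not in (u, v)]
    for mid in permutations(inner):
        path = (u, *mid, v)
        pairs = list(zip(path, path[1:]))
        if all(b in adj[a] for a, b in pairs):
            edges = {frozenset(p) for p in pairs}
            if prescribed <= edges:
                return True
    return False

# Q3: vertices 0..7, edges join labels differing in exactly one bit.
q3 = {i: {i ^ (1 << b) for b in range(3)} for i in range(8)}
u, v = 0, 7                                  # opposite partite sets (bit parity)
P = {frozenset({1, 3}), frozenset({4, 6})}   # vertex-disjoint prescribed edges
ok = has_ham_path_through(q3, u, v, P)       # e.g. 0-1-3-2-6-4-5-7 works

# If P forces two prescribed edges at the endpoint u, no u-v path can exist,
# matching the theorem's condition that u must not be an internal vertex of P.
bad = has_ham_path_through(q3, u, v, {frozenset({0, 1}), frozenset({0, 2})})
```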
Procedia PDF Downloads 205
1117 From Text to Data: Sentiment Analysis of Presidential Election Political Forums
Authors: Sergio V Davalos, Alison L. Watkins
Abstract:
User generated content (UGC) such as a website post has data associated with it: the time of the post, gender, location, type of device, and number of words. The text entered in UGC provides a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words). In addition to the number of words per post, the frequency of each term is determined per post and summed over all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the US presidential elections of 2012 and 2016. Sentiment analysis derives data from the text, which enables the subsequent application of data analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis; applying SASA produced a sentiment score for each post. Based on these scores, there are significant differences between the content and sentiment of the 2012 and 2016 presidential election forums. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment. In the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. There were also changes in sentiment over time: for both elections, as election day approached, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidate; in Trump's case, his negative posts outnumbered Clinton's largest category of posts, which was positive. KNIME topic modeling was used to derive topics from the posts. There were also changes in topics and keyword emphasis over time: initially the political parties were the most referenced, and as the election drew closer the emphasis shifted to the candidates.
The SASA method predicted sentiment better than four other methods in SentiBench. The research derived sentiment data from text; in combination with other data, the sentiment data provided insight and discovery about user sentiment in the US presidential elections of 2012 and 2016.
Keywords: sentiment analysis, text mining, user generated content, US presidential elections
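The per-post scoring and the cumulative sentiment trend described above can be sketched with a toy bag-of-words scorer. This is a minimal illustrative stand-in, not the SASA model: the lexicon, posts, and polarities below are all invented for the example, and real sentiment analyzers use far richer models.

```python
from itertools import accumulate

# Toy polarity lexicon -- an illustrative stand-in for a real model like SASA.
LEXICON = {"great": 1, "win": 1, "hope": 1, "bad": -1, "lies": -1, "worst": -1}

def post_score(text):
    """Bag-of-words sentiment: sum of term polarities over the post's tokens."""
    return sum(LEXICON.get(tok, 0) for tok in text.lower().split())

posts = [                          # hypothetical forum posts, in time order
    "great debate hope to win",
    "bad policy and lies",
    "worst candidate ever more lies",
]
scores = [post_score(p) for p in posts]   # sentiment score per post
cumulative = list(accumulate(scores))     # running sentiment over time
```

Here the running total starts positive and turns negative as later posts accumulate, mirroring the cumulative-score pattern the study reports as the elections approached.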
Procedia PDF Downloads 192
1116 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles
Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo
Abstract:
Non-Cooperative Target Identification (NCTI) has become a key research domain in the defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles (HRRPs), one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. Accordingly, an approach to NCTI based on applying Singular Value Decomposition (SVD) to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, the test set, with the profiles included in a pre-loaded database, the training set. Classification is improved by using SVD, since it allows each aircraft to be modeled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, hence reducing unwanted information such as noise. SVD permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two SVD-based metrics, F1 and F2, are used in the identification process. In F2 the angle is weighted, since the top singular vectors set the importance of the contribution to the formation of a target signal, whereas F1 simply uses the unweighted angle.
In order to build a wide database of radar signatures and evaluate performance, range profiles are obtained through numerical simulation of seven civil aircraft on trajectories taken from an actual measurement. Given the nature of the data sets, the main drawback of using simulated rather than actual measured profiles is that the former imply an ideal identification scenario: measured profiles suffer from noise, clutter and other unwanted information, while simulated profiles do not. In this case the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, noise is added before the creation of the test set. The identification results obtained with the unweighted and weighted metrics are analyzed to determine which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments on profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, recognition performance improved when weighting was applied. Future experiments with larger sets are planned, with the aim of finally using actual profiles as test sets in a real hostile situation.
Keywords: HRRP, NCTI, simulated/synthetic database, SVD
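The subspace idea at the core of the method can be sketched as follows. This is an illustrative reconstruction, not the authors' code: training profiles for each hypothetical target are stacked into a matrix, the SVD's leading left singular vectors form the signal subspace (the rest, the noise subspace, is discarded), and a test profile is assigned to the class whose subspace it is closest to in angle, i.e. the one maximizing the norm of its projection (an unweighted, F1-style criterion). Class names, dimensions, and noise levels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def signal_subspace(profiles, r):
    """Orthonormal basis of the rank-r signal subspace of a
    (range_bins x n_profiles) matrix of training HRRPs."""
    U, _, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :r]

def classify(x, subspaces):
    """Assign x to the class whose subspace is closest in angle,
    i.e. the one maximizing the norm of x's projection onto it."""
    x = x / np.linalg.norm(x)
    return max(subspaces, key=lambda c: np.linalg.norm(subspaces[c].T @ x))

# Hypothetical targets: each class is a characteristic profile plus noise.
n_bins, n_train = 64, 20
templates = {"A320": rng.normal(size=n_bins), "F18": rng.normal(size=n_bins)}
subspaces = {
    c: signal_subspace(t[:, None] + 0.1 * rng.normal(size=(n_bins, n_train)), r=3)
    for c, t in templates.items()
}

test_profile = templates["F18"] + 0.1 * rng.normal(size=n_bins)
label = classify(test_profile, subspaces)
```

Truncating to rank r keeps the high-energy directions and drops the noise subspace, which is why the subspace comparison is more robust than matching raw profiles directly; the weighted F2 metric would additionally scale each basis vector's contribution by its singular value.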
Procedia PDF Downloads 354
1115 Assessing the Impact of Climate Change on Pulses Production in Khyber Pakhtunkhwa, Pakistan
Authors: Khuram Nawaz Sadozai, Rizwan Ahmad, Munawar Raza Kazmi, Awais Habib
Abstract:
Climate change and crop production are intrinsically associated with each other. This study is therefore designed to assess the impact of climate change on pulse production in the southern districts of the Khyber Pakhtunkhwa (KP) province of Pakistan. Two pulses (chickpea and mung bean) were selected for the study. Climatic variables such as temperature, humidity and precipitation, along with pulse production and the area under pulse cultivation, were the major variables of the study. Secondary data on the climatic and crop variables for a period of thirty-four years (1986-2020) were obtained from the Pakistan Meteorological Department and the Agriculture Statistics of KP, respectively. Panel data sets for the chickpea and mung bean crops were estimated separately, and the analysis validated that both were balanced panels. The Hausman specification test, run separately on both panel data sets, suggested that the fixed effects model was appropriate for the chickpea panel data, whereas the random effects model was appropriate for the mung bean panel data. The major findings confirm that maximum temperature is statistically significant for chickpea yield: if the maximum temperature increases by 1 °C, chickpea yield increases by 0.0463 units. The impact of precipitation, however, was insignificant. Humidity was statistically significant and positively associated with chickpea yield. For mung bean, the minimum temperature contributed significantly to yield. This study concludes that temperature and humidity can significantly enhance pulse yields. It is recommended that the capacity of pulse growers be built so that they can adopt climate change adaptation strategies.
Moreover, the government should ensure the availability of climate-resilient pulse varieties to encourage pulse cultivation.
Keywords: climate change, pulses productivity, agriculture, Pakistan
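The fixed effects estimation chosen for the chickpea panel can be sketched with the standard within estimator: demean the yield and the climate regressors inside each panel unit to sweep out unobserved district effects, then run pooled OLS on the demeaned data. The example below is an illustrative sketch on fabricated data (districts, years, and coefficients are invented), not the study's actual data set.

```python
import numpy as np

def within_estimator(y, X, groups):
    """Fixed effects (within) estimator: demean y and X inside each
    panel unit, then run pooled OLS on the demeaned data."""
    yd, Xd = y.astype(float).copy(), X.astype(float).copy()
    for g in np.unique(groups):
        idx = groups == g
        yd[idx] -= yd[idx].mean()
        Xd[idx] -= Xd[idx].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta

# Hypothetical balanced panel: 3 districts x 5 years, one climate
# regressor (max temperature) with a true within-effect of 0.05 on yield.
rng = np.random.default_rng(1)
groups = np.repeat(np.arange(3), 5)
tmax = rng.normal(30, 2, size=15)
district_effect = np.array([1.0, 2.0, 3.0])[groups]   # unobserved heterogeneity
yield_ = district_effect + 0.05 * tmax + 0.01 * rng.normal(size=15)

beta = within_estimator(yield_, tmax[:, None], groups)  # recovers ~0.05
```

Demeaning removes the district intercepts, so the slope is identified from within-district variation only; the Hausman test mentioned above compares this estimator against the random effects estimator to decide which is appropriate.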
Procedia PDF Downloads 44
1114 Pairwise Relative Primality of Integers and Independent Sets of Graphs
Authors: Jerry Hu
Abstract:
Let G = (V, E) with V = {1, 2, ..., k} be a graph. The k positive integers a₁, a₂, ..., aₖ are G-wise relatively prime if (aᵢ, aⱼ) = 1 for every {i, j} ∈ E. We use an inductive approach to give an asymptotic formula for the number of k-tuples of integers that are G-wise relatively prime. An exact formula is obtained for the probability that k positive integers are G-wise relatively prime. As a corollary, we also provide an exact formula for the probability that k positive integers have exactly r relatively prime pairs.
Keywords: graph, independent set, G-wise relatively prime, probability
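The probability in question is easy to probe numerically. The sketch below is an illustrative Monte Carlo check, not the paper's exact formula: it draws random integers and tests the G-wise coprimality condition edge by edge. For G a single edge (k = 2) the probability is the classical 6/π² ≈ 0.6079; for a path on three vertices the extra edge adds a further constraint, so the estimate should come out strictly smaller.

```python
import math
import random

def gwise_coprime(nums, edges):
    """True if gcd(a_i, a_j) = 1 for every edge {i, j} of G."""
    return all(math.gcd(nums[i], nums[j]) == 1 for i, j in edges)

def estimate(k, edges, trials=100_000, N=10**6, seed=0):
    """Monte Carlo estimate of the probability that k uniform random
    integers in [1, N] are G-wise relatively prime."""
    rng = random.Random(seed)
    hits = sum(
        gwise_coprime([rng.randint(1, N) for _ in range(k)], edges)
        for _ in range(trials)
    )
    return hits / trials

# G = a single edge on two vertices: classical value 6/pi^2 ~ 0.6079
p_edge = estimate(2, [(0, 1)])
# G = the path 0-1-2: both edges must be coprime pairs
p_path = estimate(3, [(0, 1), (1, 2)])
```

Note that only the edges of G impose conditions; vertices forming an independent set are unconstrained, which is exactly the graph-theoretic structure the paper exploits.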
Procedia PDF Downloads 92
1113 Improving the Genetic Diversity of Soybean Seeds and Tolerance to Drought Irradiated with Gamma Rays
Authors: Aminah Muchdar
Abstract:
To increase the genetic diversity of soybean so that it can adapt to the agroecology of Indonesia, several approaches have been used, including introduction, crossing, mutation, and genetic transformation. The purpose of this research is to obtain early-maturing, large-seeded soybean mutant lines that are tolerant to drought and have high yield potential. This study consisted of two stages. The first, a gamma-ray sensitivity test, was carried out in the BATAN laboratory using the variety Anjasmoro. Seeds were irradiated with gamma rays from a source with an activity of 1046.16976 Ci for 0-71 minutes, at doses of 0, 100, 200, 300, 400, 500, 600, 700, 800, 900 and 1000 Gy. The results indicated that of all the irradiated seeds, only those at doses of 200 and 300 Gy showed the best germination percentage, plant height, number of leaves, number of normal sprouts and green leaves, and these doses were carried forward to the second trial in order to obtain the expected mutants. In the second stage, the M2 population, grown from the irradiated seeds, was planted. By 30 days after planting, the plants already showed growth and development that varied compared with the parent, in plant height, number of leaves, leaf shape and leaf greenness. In the generative phase, some plants irradiated at 200 and 300 Gy formed flowers but no pods, while others formed flowers and a few pods; morphological characters such as plant height, number of branches, number of pods, days to flowering, days to harvest, seed weight and seed number were recorded.
Keywords: gamma ray, genetic mutation, irradiation, soybean
Procedia PDF Downloads 400
1112 The Influence of Ecologically-Valid High- and Low-Volume Resistance Training on Muscle Strength and Size in Trained Men
Authors: Jason Dellatolla, Scott Thomas
Abstract:
Much of the current literature on resistance training (RT) volume prescription lacks ecological validity, and very few studies investigate true high-volume ranges. Purpose: The present study investigated the effects of ecologically valid high- vs. low-volume RT on muscular size and strength in trained men. Methods: Trained, college-aged men were systematically randomized into two groups: low-volume (LV; n = 4) and high-volume (HV; n = 5); the sample size was limited by COVID-19 restrictions. Subjects followed an ecologically valid 6-week RT program targeting both muscle size and strength, training 3x/week on non-consecutive days. Over the six weeks, the LV and HV groups progressed gradually from 15 to 23 sets/week and from 30 to 46 sets/week of lower-body RT, respectively. Muscle strength was assessed via 3RM tests in the squat, stiff-leg deadlift (SL DL), and leg press. Muscle hypertrophy was evaluated through a combination of DXA, BodPod, and ultrasound (US) measurements. Results: Two-way repeated-measures ANOVAs indicated that strength in all three compound lifts increased significantly in both groups (p < 0.01); between-group differences occurred only in the squat (p = 0.02) and SL DL (p = 0.03), both favoring the HV group. Significant pre-to-post increases in indicators of hypertrophy were found for lean body mass in the legs via DXA, overall fat-free mass via BodPod, and US measures of muscle thickness (MT) for the rectus femoris, vastus intermedius, vastus medialis, vastus lateralis, long head of the biceps femoris, and total MT. Between-group differences were found only for MT of the vastus medialis, favoring the HV group. Moreover, each additional weekly set of lower-body RT was associated with an average increase in thigh MT of 0.39%. Conclusion: We conclude that ecologically valid RT regimens significantly improve muscular strength and indicators of hypertrophy.
Compared with the LV group, the HV group achieved significantly greater gains in muscular strength but no greater hypertrophy over the course of six weeks in trained, college-aged men.
Keywords: ecological validity, hypertrophy, resistance training, strength
Procedia PDF Downloads 114
1111 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b), which means they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which compromises the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. To circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and a learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types and properties of atoms, the types of bonds, and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on four different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work and pertains to 1000 simulations of quinone molecules (up to 24 atoms) at five different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.
Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
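The Δ-ML strategy of learning only the low-to-high correction can be sketched in a few lines. The example below is purely illustrative, not the paper's GCN: a plain linear least-squares model stands in for the learned map, the "features" and the two fidelity levels are synthetic, and the point is only that fitting high − low on a small set of expensive calculations suffices to predict high fidelity everywhere.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic descriptors and two fidelity levels: "high" adds a smooth
# correction to the cheap "low" level (stand-ins for e.g. DFT vs CCSD).
n, d = 500, 8
X = rng.normal(size=(n, d))
low = X @ rng.normal(size=d)                     # cheap low-fidelity energies
high = low + 0.3 * (X[:, 0] - 0.5 * X[:, 1])     # correction is linear here

# Delta-ML: fit only the correction (high - low) on a *small* set of
# expensive high-fidelity calculations.
idx = rng.choice(n, size=30, replace=False)      # 30 high-fidelity points
A_small = np.hstack([X[idx], np.ones((30, 1))])
coef, *_ = np.linalg.lstsq(A_small, (high - low)[idx], rcond=None)

# Predict high fidelity everywhere as low-fidelity + learned correction.
A_full = np.hstack([X, np.ones((n, 1))])
pred = low + A_full @ coef
rmse = float(np.sqrt(np.mean((pred - high) ** 2)))
```

Because the correction is usually far smoother than the raw property, a model trained on tens of high-fidelity points can succeed where learning the property directly would need thousands; the paper's GCN plays the role of the linear map here, and its semi-supervised component further relaxes the need for paired low/high data.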
Procedia PDF Downloads 40