Search results for: Difficult intubation in glottic cancer
964 Peer Bullying and Mentalization from the Perspective of Pupils
Authors: Anna Siegler
Abstract:
Bullying among peers is not uncommon; however, adults notice only a fraction of harassment cases in everyday life. Systemic approaches to bullying investigation put the whole school community at the focus of attention and propose that the solution should emerge from the culture of the school. Bystanders are essential to prevention and intervention processes as active agents rather than passive ones. To combat exclusion, stigmatization and harassment, it is important that bystanders realize they have the power to take action. To prevent the escalation of violence, victims must believe that students and teachers will help them and that their environment is able to provide safety. The study is based on the scientific narrative psychological approach and examines students' different perspectives on how peers mentalize with each other in cases of bullying. The data collection comprised responses from students (N = 138) at three schools in Hungary, located in three different areas of the country (Budapest, Martfű and Barcs). The test battery included the Bullying Prevalence Questionnaire, the Interpersonal Reactivity Index and an instruction to elicit narratives about bullying, whose effectiveness was tested in a pilot study. The obtained results are in line with the findings of previous bullying research: victims mentalize less with their peers and experience greater personal distress when they are in identity-threatening situations, thus focusing on their own difficulties rather than on social signals. This isolation is an adaptive response in the short term, although it appears to lead to a deficit in social skills later in life and makes it difficult for students to become socially integrated into society. In addition, the results also show that students use more mental state attribution when they report verbal bullying than in cases of physical abuse.
Those who witness physical harassment also witness concrete responses to the problem from teachers; in contrast, verbal abuse often goes without consequences. According to the results, students mentalize more in these stories because they have fewer normative explanations for what happened. By expanding the bullying literature, this research helps to find ways to reduce school violence through community development.
Keywords: bullying, mentalization, narrative, school culture
Procedia PDF Downloads 164
963 Synthesis, Characterization and Photocatalytic Activity of Electrospun Zinc and/or Titanium Oxide Nanofibers for Methylene Blue Degradation
Authors: Zainab Dahrouch, Beatrix Petrovičová, Claudia Triolo, Fabiola Pantò, Angela Malara, Salvatore Patanè, Maria Allegrini, Saveria Santangelo
Abstract:
Synthetic dyes dispersed in water cause environmental damage and have harmful effects on human health. Methylene blue (MB) is broadly used as a dye in the textile, pharmaceutical, printing, cosmetics, leather, and food industries. The complete removal of MB is difficult due to the presence of aromatic rings in its structure. The present study focuses on electrospun nanofibers (NFs) with engineered architecture and surface to be used as catalysts for the photodegradation of MB. Ti and/or Zn oxide NFs are produced by electrospinning precursor solutions with different Ti:Zn molar ratios (from 0:1 to 1:0). Subsequent calcination and cooling steps are operated at fast rates to generate porous NFs with capture centers that reduce the recombination rate of the photogenerated charges. The comparative evaluation of the NFs as photocatalysts for the removal of MB from an aqueous solution with a dye concentration of 15 µM under UV irradiation shows that the binary (wurtzite ZnO and anatase TiO₂) oxides exhibit higher catalytic activity than the ternary (ZnTiO₃ and Zn₂TiO₄) oxides. The higher band gap and lower crystallinity of the ternary oxides are responsible for their lower photocatalytic activity. The optimal load for wurtzite ZnO is 0.66 mg mL⁻¹, yielding a degradation rate constant of 7.94×10⁻² min⁻¹. The optimal load for anatase TiO₂ is lower (0.33 mg mL⁻¹) and the corresponding rate constant (1.12×10⁻¹ min⁻¹) is higher. This finding (higher activity with lower load) is of crucial importance for scaling up the process to an industrial level. Indeed, the anatase NFs outperform even the commonly used P25-TiO₂ benchmark. Moreover, they can be reused twice without any regeneration treatment, with activity decreases of 5.2% and 18.7% after the second and third uses, respectively.
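The reported rate constants correspond to a pseudo-first-order kinetic model, ln(C₀/C) = kt, which is the usual way MB photodegradation data are linearized. A minimal sketch of such a fit, using synthetic concentrations generated at the reported anatase rate constant (the data points below are illustrative, not the actual measurements):

```python
import numpy as np

# Hypothetical data: ideal first-order MB decay at the reported
# anatase rate constant k = 1.12e-1 min^-1 (synthetic, not measured).
k_reported = 0.112                    # min^-1
t = np.arange(0.0, 31.0, 5.0)         # irradiation times, min
C0 = 15.0                             # initial MB concentration, µM
C = C0 * np.exp(-k_reported * t)      # first-order decay

# Linearize ln(C0/C) = k*t and recover k by least squares.
k_fit = np.polyfit(t, np.log(C0 / C), 1)[0]
print(f"fitted k = {k_fit:.3f} min^-1")   # prints: fitted k = 0.112 min^-1
```

On ideal first-order data the fitted slope recovers the rate constant exactly; on real measurements, the residuals of this linear fit indicate how well the pseudo-first-order assumption holds.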
Thanks to the scalability of the electrospinning technique, this laboratory-scale study provides a perspective towards the sustainable large-scale manufacture of photocatalysts for the treatment of industrial effluents.
Keywords: anatase, capture centers, methylene blue dye, nanofibers, photodegradation, zinc oxide
Procedia PDF Downloads 157
962 Selenuranes as Cysteine Protease Inhibitors: Theoretical Investigation on Model Systems
Authors: Gabriela D. Silva, Rodrigo L. O. R. Cunha, Mauricio D. Coutinho-Neto
Abstract:
In the last four decades the biological activities of selenium compounds have received great attention, particularly for hypervalent derivatives of selenium (IV) used as enzyme inhibitors. The unregulated activity of cysteine proteases is related to the development of several pathologies, such as neurological disorders, cardiovascular diseases, obesity, rheumatoid arthritis, cancer and parasitic infections. These enzymes are therefore a valuable target for designing new small-molecule inhibitors such as selenuranes. Even though there have been advances in the synthesis and design of new selenurane-based inhibitors, little is known about their mechanism of action. It is a given that inhibition occurs through the reaction between the thiol group of the enzyme and the chalcogen atom. However, several open questions remain about the nature of the mechanism (associative vs. dissociative) and about the nature of the reactive species in solution under physiological conditions. In this work we performed a theoretical investigation on model systems to study the possible routes of the substitution reactions. Among the nucleophiles present in biological systems, our interest centers on the thiol groups of the cysteine proteases and the hydroxyls of the aqueous environment. We therefore expect this study to clarify the possibility of a two-stage reaction route, the first stage consisting of the substitution of chlorine atoms by hydroxyl groups, followed by the replacement of these hydroxyl groups by thiol groups in selenuranes. The structures of selenuranes and nucleophiles were optimized using density functional theory with the B3LYP functional and a 6-311+G(d) basis set. Solvent was treated using the IEFPCM method as implemented in the Gaussian 09 code. Our results indicate that hydroxyl groups from water react preferentially with selenuranes and are subsequently replaced by thiol groups.
The computed reaction energies are -106.07 kcal/mol for double substitution by hydroxyl groups and 96.63 kcal/mol for thiol groups. Solvation and pH reduction promote this route, increasing the energy value for the reaction with hydroxyl groups to -50.76 kcal/mol and decreasing the energy value for thiol groups to 7.92 kcal/mol. Alternative pathways were analyzed for monosubstitution (considering the competition between Cl, OH and SH groups), and they suggest the same route. Similar results were obtained for the aliphatic and aromatic selenuranes studied.
Keywords: chalcogens, computational study, cysteine proteases, enzyme inhibitors
Procedia PDF Downloads 305
961 Applying Computer Simulation Methods to a Molecular Understanding of Flaviviruses Proteins towards Differential Serological Diagnostics and Therapeutic Intervention
Authors: Sergio Alejandro Cuevas, Catherine Etchebest, Fernando Luis Barroso Da Silva
Abstract:
The flavivirus genus includes several organisms responsible for generating various diseases in humans. Especially in Brazil, the Zika (ZIKV), Dengue (DENV) and Yellow Fever (YFV) viruses have raised great health concerns due to the high number of cases affecting the area in recent years. Diagnosis is still a difficult issue since the clinical symptoms are highly similar. Understanding their common structural/dynamical features and biomolecular interactions, as well as their differences, might suggest alternative strategies towards differential serological diagnostics and therapeutic intervention. Due to its immunogenicity, the primary focus of this study was the ZIKV, DENV and YFV non-structural protein 1 (NS1). By means of computational studies, we calculated the main physicochemical properties of this protein from different strains that are directly responsible for its biomolecular interactions and, therefore, can be related to the differential infectivity of the strains. We also mapped the electrostatic differences at both the sequence and structural levels for the strains from Uganda to Brazil, which could suggest possible molecular mechanisms for the increased virulence of ZIKV. It is interesting to note that despite the small changes in the protein sequence, due to the high sequence identity among the studied strains, the electrostatic properties are strongly affected by pH, which also impacts their biomolecular interactions with partners and, consequently, the molecular viral biology. The African and Asian strains are distinguishable. By exploring the interfaces used by NS1 to self-associate in different oligomeric states and to interact with membranes and the antibody, we could map the strategy used by the ZIKV during its evolutionary process. This indicates possible molecular mechanisms that can explain the different immunological responses.
By comparison with the known antibody structure available for the West Nile virus, we demonstrated that the antibody would have difficulty neutralizing the NS1 from the Brazilian strain. The present study also opens up perspectives for computationally designing high-specificity antibodies.
Keywords: zika, biomolecular interactions, electrostatic interactions, molecular mechanisms
Procedia PDF Downloads 133
960 Ethical 'Spaces': A Critical Analysis of the Medical, Ethical and Legal Complexities in the Treatment and Care of Unidentified and Critically Incapacitated Victims Following a Disaster
Authors: D. Osborn, L. Easthope
Abstract:
The increasing threat of ‘marauding terror’, utilising improvised explosive devices and firearms, has focused the attention of policy makers and emergency responders once again on the treatment of the critically injured patient in a highly volatile scenario. Whilst significant improvements have been made in the response, and lessons have been learned from recent disasters in the international disaster community, there still remain areas of uncertainty and a lack of clarity in the care of the critically injured. This innovative, longitudinal study has at its heart the aim of using ethnographic methods to ‘slow down’ the journey such patients will take and make visible the ethical complexities that 2017 technologies, expectations and over a decade of improved combat medicine techniques have brought. The primary researcher, previously employed in the hospital emergency management environment, has closely followed responders as they managed casualties with life-threatening injuries. Ethnographic observation of Exercise Unified Response in March 2016 exposed the ethical and legal 'vacuums' within a mass casualty and fatality setting, specifically the extrication, treatment and care of critically injured patients from crushed and overturned train carriages. This article highlights a gap in the debate, evaluation, planning and response to an incident of this nature, specifically the incapacitated, unidentified patients and the ethics of submitting them to the invasive ‘Disaster Victim Identification’ process.
Using a qualitative ethnographic analysis triangulating observation, interviews and documentation, this study explores the gaps and highlights the next stages in the researcher’s pathway as she continues to explore with emergency practitioners some of this century’s most difficult questions in relation to the medico-legal and ethical challenges faced by emergency services in the wake of new and emerging threats and medical treatment expectations.
Keywords: ethics, disaster, Disaster Victim Identification (DVI), legality, unidentified
Procedia PDF Downloads 192
959 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches
Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez
Abstract:
Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. The simulation methods are computationally intensive and time-consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics could be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed that allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of fitting the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first-order reliability method, FORM).
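As a concrete illustration, Rosenblueth's two-point estimate (a common form of the PEM) evaluates the LSF at the 2ⁿ combinations of μᵢ ± σᵢ and averages the results to approximate the moments of the LSF, which can then feed a FORM-style reliability index. The sketch below uses a hypothetical linear limit state g = R - S with assumed moments, not the bridge model of the study:

```python
import itertools
import math

def pem_moments(g, means, stds):
    """Rosenblueth's two-point estimate for uncorrelated, symmetric
    variables: evaluate g at all 2^n combinations of mu_i +/- sigma_i
    with equal weights and return the mean and std of g."""
    vals = []
    for signs in itertools.product((-1.0, 1.0), repeat=len(means)):
        x = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        vals.append(g(x))
    mu = sum(vals) / len(vals)
    var = sum((v - mu) ** 2 for v in vals) / len(vals)
    return mu, math.sqrt(var)

# Hypothetical linear limit state g = R - S (capacity minus demand);
# the moments below are assumed for illustration only.
mu_g, sigma_g = pem_moments(lambda x: x[0] - x[1], [30.0, 20.0], [3.0, 4.0])

# FORM-style step: fit a normal PDF to the moments of g, so the
# Cornell reliability index gives the failure probability directly.
beta = mu_g / sigma_g
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))
```

For this linear g the PEM moments are exact (μ_g = 10, σ_g = 5, so β = 2); for an LSF evaluated through FEM, each of the 2ⁿ points is simply one deterministic structural analysis, which is the appeal of the method.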
The results of the present study are in good agreement with those computed with the MCS. Therefore, mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM or computational mechanics are employed.
Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, Monte Carlo simulation
Procedia PDF Downloads 346
958 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing
Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto
Abstract:
Computational Fluid Dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains fluid flow features as they are commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon and a Reynolds Stress Model (RSM)) as well as LES. The partitioners Hilbert, METIS, ParMETIS and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture applying the highly parallel flow solver Code SATURNE, typically using 32768 or more processors in parallel. Visualisations were performed by means of PARAVIEW. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and comparisons with other databases. The results showed that an RSM represents an appropriate choice with respect to modeling high-Reynolds-number flow cases. In particular, the Rij-SSG (Speziale, Sarkar, Gatzki) variant turned out to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence
Procedia PDF Downloads 382
957 Leça da Palmeira Revisited: Sixty-Seven Years of Recurring Work by Álvaro Siza
Authors: Eduardo Jorge Cabral dos Santos Fernandes
Abstract:
Over the last sixty-seven years, Portuguese architect Álvaro Siza Vieira designed several interventions for the Leça da Palmeira waterfront. With this paper, we aim to analyze the history of this set of projects in a chronological approach, seeking to understand the connections that can be established between them. Born in Matosinhos, a fishing and industrial village located near Porto, Álvaro Siza built a remarkable relationship with Leça da Palmeira (a neighboring village located to the north) from a personal and professional point of view throughout his life: it was there that he got married (in the small chapel located next to the Boa Nova lighthouse) and it was there that he designed his first works of great impact, the Boa Nova Tea House and the Ocean Swimming Pool, today classified as national monuments. These two works were the subject of several projects spaced over time, including recent restoration interventions designed by the same author. However, the marks of Siza's intervention in this territory are not limited to these two cases; there were other projects designed for this territory, which we also intend to analyze: the monument to the poet António Nobre (1967-80), the unbuilt project for a restaurant next to Piscina das Marés (presented in 1966 and redesigned in 1993), the reorganization of the Avenida da Liberdade (with a first project, not carried out, in 1965-74, and a reformulation carried out between 1998 and 2006) and, finally, the project for the new APDL facilities, which completes Avenida da Liberdade to the south (1995). Altogether, these interventions are so striking in this territory, from a landscape, formal, functional, and tectonic point of view, that it is difficult to imagine this waterfront without their presence. In all cases, the relationship with the site explains many of the design options. 
Time after time, the conditions of the pre-existing territory (also affected by the previous interventions of Siza) were considered, so each project created a new circumstance, conditioning the following interventions. This paper is part of a more comprehensive project, which aims to analyze the work of Álvaro Siza in its fundamental relationship with the site.
Keywords: Álvaro Siza, contextualism, Leça da Palmeira, landscape
Procedia PDF Downloads 33
956 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton
Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani
Abstract:
Extreme values in a dataset can occur due to unusual circumstances during observation. Such data can provide important information that cannot be provided by other data, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method. The distribution of extreme data obtained with the block maxima method is called an extreme value distribution; here, it is the Gumbel distribution with two parameters. The maximum likelihood (ML) estimates of the Gumbel parameters cannot be determined exactly in closed form, so a numerical approximation is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has the form of a double exponential function. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter updates at each iteration. Newton's method is then modified with the addition of a step length to guarantee convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by updating an approximation of the second derivative at each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that maximize the likelihood function. This method requires the gradient vector and the Hessian matrix. This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimates of the Gumbel distribution parameters.
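A minimal sketch of the estimation step, using SciPy's BFGS implementation on synthetic block maxima (the location μ = 30 and scale β = 8, and the method-of-moments starting point, are illustrative assumptions, not values from the study):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

# Synthetic block maxima from a Gumbel distribution; mu = 30 and
# beta = 8 are illustrative values, not the Purworejo estimates.
data = gumbel_r.rvs(loc=30.0, scale=8.0, size=500, random_state=0)

def neg_log_likelihood(theta):
    mu, log_beta = theta          # optimize log(beta) to keep beta > 0
    beta = np.exp(log_beta)
    z = (data - mu) / beta
    # Gumbel log-density: -log(beta) - z - exp(-z); negate and sum.
    return np.sum(np.log(beta) + z + np.exp(-z))

# Method-of-moments starting point, then BFGS on the negative
# log-likelihood (gradient approximated by finite differences).
beta0 = data.std() * np.sqrt(6.0) / np.pi
x0 = [data.mean() - 0.5772 * beta0, np.log(beta0)]
res = minimize(neg_log_likelihood, x0, method="BFGS")
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
```

Reparametrizing through log(β) keeps the scale positive without constraints, which is what makes the unconstrained BFGS method applicable here.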
The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The results indicate that the intensity of the high rainfall that occurred in Purworejo District decreased, as did the range of rainfall.
Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton
Procedia PDF Downloads 326
955 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint
Authors: Semachew M. Kassa, Africa M. Geremew, Tezera F. Azmatch, Nandyala Darga Kumar
Abstract:
Frequency ratio (FR) and analytical hierarchy process (AHP) methods are developed from past landslide failure points to produce landslide susceptibility maps, because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial resolution. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region consisted of places with high or very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The areas with the highest landslide risk include the western part of Amhara Saint Town, the northern part, and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the top contributing factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies.
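A sketch of such a five-model comparison with scikit-learn, scored by F1 and AUC as in the study (the data are synthetic stand-ins with 14 features, not the Amhara Saint inventory):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in data: 14 synthetic conditioning factors and a binary
# landslide / non-landslide label (not the real inventory points).
X, y = make_classification(n_samples=1000, n_features=14,
                           n_informative=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42)

models = {
    "RF": RandomForestClassifier(random_state=42),
    "SVM": SVC(probability=True, random_state=42),
    "LR": LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(max_iter=1000, random_state=42),
    "NB": GaussianNB(),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]  # susceptibility in [0, 1]
    scores[name] = (f1_score(y_te, model.predict(X_te)),
                    roc_auc_score(y_te, proba))
```

In a susceptibility-mapping workflow, `predict_proba` applied to every grid cell yields the continuous susceptibility surface that is then thresholded (e.g., at 0.5) into risk classes.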
It also suggests that different places should adopt different safeguards to reduce or prevent serious damage from landslide events.
Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine
Procedia PDF Downloads 84
954 Evaluation of the Effect of Lactose Derived Monosaccharide on Galactooligosaccharides Production by β-Galactosidase
Authors: Yenny Paola Morales Cortés, Fabián Rico Rodríguez, Juan Carlos Serrato Bermúdez, Carlos Arturo Martínez Riascos
Abstract:
The numerous benefits of galactooligosaccharides (GOS) as prebiotics have motivated the study of enzymatic processes for their production. These processes have special complexities due to several factors that make high productivity difficult, such as enzyme type, reaction medium pH, substrate concentrations and the presence of inhibitors, among others. In the present work the production of galactooligosaccharides (with different degrees of polymerization: two, three and four) from lactose was studied. The study considers the formulation of a mathematical model that predicts the production of GOS from lactose using the enzyme β-galactosidase. The effect of pH on the reaction was studied using phosphate buffer at three pH values (6.0, 6.5 and 7.0). At pH 6.0 the enzymatic activity was insignificant, while at pH 7.0 the enzymatic activity was approximately 27 times greater than at 6.5. The latter result differs from previously reported results; therefore, pH 7.0 was chosen as the working pH. Additionally, the enzyme concentration was analyzed, showing that its effect depends on the pH; the concentration was set at 0.272 mM for the following studies. Afterwards, experiments were performed varying the lactose concentration to evaluate its effects on the process and to generate the data for fitting the mathematical model parameters. The mathematical model considers the reactions of lactose hydrolysis and transgalactosylation for the production of disaccharides and trisaccharides, with their inverse reactions. The production of tetrasaccharides was negligible and, because of that, was not included in the model.
The reaction was monitored by HPLC, and for the quantitative analysis of the experimental data the Matlab programming language was used, including solvers for integrating systems of differential equations (ode15s) and for nonlinear optimization (fminunc). The results confirm that the transgalactosylation and hydrolysis reactions are reversible; additionally, inhibition of GOS production by glucose and galactose is observed. Regarding the production process of galactooligosaccharides, the results show that high initial lactose concentrations are necessary, since these favor the transgalactosylation reaction, while low concentrations favor the hydrolysis reactions.
Keywords: β-galactosidase, galactooligosaccharides, inhibition, lactose, Matlab, modeling
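The study integrated its kinetic model with Matlab's ode15s and fitted parameters with fminunc; an equivalent open-source sketch is shown below in Python, using a toy reversible scheme (one hydrolysis and one transgalactosylation reaction) with illustrative rate constants rather than the fitted values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy mass-action scheme with illustrative (not fitted) constants:
#   hydrolysis:           Lac <-> Glc + Gal    (k1, k1r)
#   transgalactosylation: Lac + Gal <-> GOS3   (k2, k2r)
k1, k1r, k2, k2r = 0.05, 0.001, 0.02, 0.005

def rhs(t, c):
    lac, glc, gal, gos3 = c
    r1 = k1 * lac - k1r * glc * gal      # net hydrolysis rate
    r2 = k2 * lac * gal - k2r * gos3     # net transgalactosylation rate
    return [-r1 - r2, r1, r1 - r2, r2]

c0 = [100.0, 0.0, 0.0, 0.0]              # start from lactose only (a.u.)
# LSODA plays the same stiff-solver role here that ode15s does in Matlab.
sol = solve_ivp(rhs, (0.0, 60.0), c0, method="LSODA",
                rtol=1e-8, atol=1e-10)
final = sol.y[:, -1]                     # [Lac, Glc, Gal, GOS3] at t = 60
```

In a fitting workflow, this integration sits inside the objective function: the solver output is compared against the HPLC concentration profiles, and an optimizer (fminunc in the study, e.g. `scipy.optimize.minimize` here) adjusts the rate constants. Note that the scheme conserves glucose moieties (Lac + Glc + GOS3 is constant), a useful sanity check on the integration.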
Procedia PDF Downloads 358
953 Effect of Auraptene on the Enzymatic Glutathione Redox-System in Nrf2 Knockout Mice
Authors: Ludmila A. Gavriliuc, Jerry McLarty, Heather E. Kleiner, J. Michael Mathis
Abstract:
Background: The citrus coumarin auraptene (Aur) is an effective chemopreventive agent, as demonstrated in many models of diseases and cancer. Nuclear factor erythroid 2-related factor (Nrf2) is an important regulator of genes induced by oxidative stress, such as glutathione S-transferases, heme oxygenase-1, and peroxiredoxin 1, through activation of the antioxidant response element (ARE). Genetic and biochemical evidence has demonstrated that glutathione (GSH) and the glutathione-dependent enzymes glutathione reductase (GR), glutathione peroxidases (GPs), and glutathione S-transferases (GSTs) are responsible for the control of intracellular reduction-oxidation status and participate in cellular adaptation to oxidative stress. The effect of Aur on the activity of GR and GPs (Se-GP and Se-iGP) and the content of GSH in the liver, kidney, and spleen is insufficiently explored. Aim: Our goal was to examine the influence of Aur on the redox system of GSH in Nrf2 wild-type and Nrf2 knockout mice via activation of Nrf2 and the ARE. Methods: Twenty female mice, 10 Nrf2 wild-type (WT) and 10 Nrf2 (-/-) knockout (KO), were bred and genotyped for our study. The activity of GR, Se-GP, Se-iGP, GST, G6PD, CytP450 reductase, and catalase (Cat), and the content of GSH, were analyzed in the liver, kidney, and spleen using spectrophotometric methods. The specific enzyme activities and the amount of GSH were analyzed with ANOVA and Spearman statistical methods. Results: Aur (200 mg/kg) treatment induced hepatic GST, GR, and Se-GP activity and inhibited their activity in the spleen of mice, most likely via activation of the ARE through Nrf2. Activation of Se-GP and G6PD in the kidney by Aur is apparently also controlled through Nrf2. The non-parametric Spearman correlation analysis indicated a strong positive correlation between GR and G6PD only in the liver of WT control mice (r=+0.972; p < 0.005) and in the kidney of KO control mice (r=+0.958; p < 0.005).
The observed low content of GSH in the liver of KO mice indicated its increased participation in the neutralization of toxic substances in the absence of induction of GSH-dependent enzymes such as GST, GR, Se-GP, and Se-iGP. Activation of CytP450 in the kidney and spleen, and of Cat in the liver, in KO mice probably reveals another regulatory mechanism for these enzymes. Conclusion: The results obtained indicate that Aur can modulate the activity of genes and the antioxidant enzymatic redox system of GSH, which is responsible for the control of intracellular reduction-oxidation status.
Keywords: auraptene, glutathione, GST, Nrf2
Procedia PDF Downloads 149
952 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”
Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen
Abstract:
Exoplanet atmospheric parameter retrieval is a complex, computationally intensive, inverse modeling problem in which an exoplanet’s atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods therefore compromises model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on previous models’ speed and accuracy. We demonstrate the efficacy of artificial intelligence to quickly and reliably predict atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to function on low-power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions.
With the edge computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.
Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval
Procedia PDF Downloads 171
951 A Feminist Historical Institutional Approach and Gender Participation in Queensland Politics
Authors: Liz van Acker, Linda Colley
Abstract:
Political processes are shaped by the gendered culture of parliaments. This paper examines how the institution of parliament has been affected by the changing number of women in politics. In order to understand how and why gender change occurs, the paper employs a feminist historical institutionalism approach. It argues that while it is difficult to change the gendered nature of political institutions, it is possible, from a gender perspective, to understand the processes of change both formally and informally. Increasing women’s representation has been a slow process which has not occurred without political struggles. A broadly defined ‘feminist historical institutionalism’ has critiqued existing approaches to institutions and combined historical institutional analysis with tools of gender to enhance our understanding of institutional processes and change. The paper examines the gendered rules, norms, and practices that influence institutional design choices and processes. Institutions such as Parliament often are able to adjust to women’s entry and absorb them without too much interruption. Exploring the hidden aspects to informal institutions involves identifying unspoken and accepted norms that may guide decision-making – exposing and questioning the gender status quo. This paper examines the representation of women in the Queensland Parliament, Australia. It places the Queensland experience in historical context, as well as in the national and international context. The study is interesting, given that its gender representation has rocketed from one of the worst performing states in 2012 to one of the best performing in 2015 with further improvements in 2017. The state currently has a re-elected female Premier, a female Deputy Premier and a female-dominated cabinet – in fact, Queensland was the first ministry in Australia to have a majority of women in its Cabinet. 
However, one need not dig far below these headlines to see that this is uncharacteristic of its history: progress towards the current position has been slow and patchy. The paper finds that matters such as the glass ceiling and the use of quotas explain women’s recent success in Queensland politics.
Keywords: feminist historical institutional approach, glass ceiling, quotas, women’s participation in politics
Procedia PDF Downloads 151
950 A Preliminary Kinematic Comparison of Vive and Vicon Systems for the Accurate Tracking of Lumbar Motion
Authors: Yaghoubi N., Moore Z., Van Der Veen S. M., Pidcoe P. E., Thomas J. S., Dexheimer B.
Abstract:
Optoelectronic 3D motion capture systems, such as the Vicon kinematic system, are widely utilized in biomedical research to track joint motion. These systems are considered powerful and accurate measurement tools with <2 mm average error. However, these systems are costly and may be difficult to implement and utilize in a clinical setting. 3D virtual reality (VR) is gaining popularity as an affordable and accessible tool to investigate motor control and perception in a controlled, immersive environment. The HTC Vive VR system includes puck-style trackers that seamlessly integrate into its VR environments. These affordable, wireless, lightweight trackers may be more feasible for clinical kinematic data collection. However, the accuracy of HTC Vive Trackers (3.0), when compared to optoelectronic 3D motion capture systems, remains unclear. In this preliminary study, we compared the HTC Vive Tracker system to a Vicon kinematic system in a simulated lumbar flexion task. A 6-DOF robot arm (SCORBOT ER VII, Eshed Robotec/RoboGroup, Rosh Ha’Ayin, Israel) completed various reaching movements to mimic increasing levels of hip flexion (15°, 30°, 45°). Light reflective markers, along with one HTC Vive Tracker (3.0), were placed on the rigid segment separating the elbow and shoulder of the robot. We compared position measures simultaneously collected from both systems. Our preliminary analysis shows no significant differences between the Vicon motion capture system and the HTC Vive tracker in the Z axis, regardless of hip flexion. In the X axis, we found no significant differences between the two systems at 15 degrees of hip flexion but minimal differences at 30 and 45 degrees, ranging from 0.047 cm ± 0.02 SE (p = 0.03) at 30 degrees of hip flexion to 0.194 cm ± 0.024 SE (p < 0.0001) at 45 degrees of hip flexion. In the Y axis, we found a minimal difference at 15 degrees of hip flexion only (0.743 cm ± 0.275 SE; p = 0.007).
This preliminary analysis suggests that the HTC Vive Tracker may be an appropriate, affordable option for gross motor motion capture when a Vicon system is not available, such as in clinical settings. Further research is needed to compare these two motion capture systems in different body poses and for different body segments.
Keywords: lumbar, Vive tracker, Vicon system, 3D motion, ROM
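The per-axis comparison described above reduces to averaging the differences between simultaneously sampled positions from the two systems. A minimal sketch of that step, assuming time-aligned samples; the function name and coordinate values are illustrative, not the study's data:

```python
# Hypothetical sketch: mean absolute per-sample difference between two
# time-aligned position tracks (e.g. Vicon vs. Vive, one axis, in cm).
# Sample values below are invented for demonstration.

def axis_difference_cm(system_a, system_b):
    """Mean absolute difference (cm) between two aligned position tracks."""
    assert len(system_a) == len(system_b), "tracks must be time-aligned"
    diffs = [abs(a - b) for a, b in zip(system_a, system_b)]
    return sum(diffs) / len(diffs)

vicon_x = [10.00, 10.52, 11.01, 11.49]  # cm, illustrative
vive_x = [10.05, 10.50, 11.08, 11.44]   # cm, illustrative
print(axis_difference_cm(vicon_x, vive_x))
```

In the study itself, the axis-wise differences were then tested for statistical significance; the sketch covers only the descriptive step.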
Procedia PDF Downloads 102
949 Effect of Cooking Process on the Antioxidant Activity of Different Variants of Tomato-Based Sofrito
Authors: Ana Beltran Sanahuja, A. Valdés García, Saray Lopez De Pablo Gallego, Maria Soledad Prats Moya
Abstract:
Tomato consumption has greatly increased worldwide in the last few years, mostly due to a growing demand for products like sofrito. Regular consumption of tomato-based products has been consistently associated with a reduction in the incidence of chronic degenerative diseases. Sofrito is a homemade tomato sauce, typical of the Mediterranean area, whose main ingredients are tomato, onion, garlic and olive oil. Variations of sofrito add other spices, which contribute not only color, flavor, smell and aroma but also medicinal properties, due to their antioxidant power. This protective effect has mainly been attributed to the predominant bioactive compounds present in sofrito, such as lycopene and other carotenoids, as well as more than 40 different polyphenols. The cooking process can modify the properties and availability of nutrients in sofrito; however, there is not enough information on this issue. For this reason, the aim of the present work is to evaluate the effect of cooking on the antioxidant capacity of different variants of tomato-based sofrito combined with other spices, through analysis of total phenol content (TPC) and evaluation of antioxidant capacity using the free-radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) method. Based on the results obtained, it can be confirmed that the basic sofrito composed of tomato, onion, garlic and olive oil, and the sofrito with 1 g of rosemary added, have the highest phenol content, presenting greater antioxidant power than an industrial sofrito and than other variants of sofrito with added thyme or higher amounts of garlic.
Moreover, it was observed that tomato-based sofrito can be cooked for up to 60 minutes, since the cooking process increases the bioavailability of the carotenoids by breaking the cell walls, which weakens the binding forces between the carotenoids and increases the levels of antioxidants present, as confirmed by both the TPC and DPPH methods. It can be concluded that the cooking process of different variants of tomato-based sofrito, including spices, can improve antioxidant capacity. The synergistic effects of different antioxidants may have a greater protective effect, also increasing the digestibility of proteins. In addition, the antioxidants help to deactivate the free radicals involved in conditions such as atherosclerosis, aging, immune suppression, cancer, and diabetes.
Keywords: antioxidants, cooking process, phenols, sofrito
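The DPPH assay named above is conventionally reported as percent inhibition of the radical relative to a control absorbance reading (typically at 517 nm). A minimal sketch of that standard calculation; the absorbance values are illustrative, as the abstract does not report its raw readings:

```python
# Standard DPPH radical-scavenging calculation: percent inhibition relative
# to the control absorbance. The numbers below are made up for illustration.

def dpph_inhibition(abs_control, abs_sample):
    """Percent inhibition of the DPPH radical by the sample extract."""
    return (abs_control - abs_sample) / abs_control * 100.0

# e.g. control absorbance 0.80, sofrito-extract absorbance 0.32
print(round(dpph_inhibition(0.80, 0.32), 1))  # → 60.0
```

A higher percent inhibition indicates stronger radical-scavenging (antioxidant) activity of the extract.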
Procedia PDF Downloads 142
948 Geochemical Characteristics and Chemical Toxicity: Appraisal of Groundwater Uranium With Other Geogenic Contaminants in Various Districts of Punjab, India
Authors: Tanu Sharma, Bikramjit Singh Bajwa, Inderpreet Kaur
Abstract:
Monitoring of groundwater in the Tarn-Taran, Bathinda, Faridkot and Mansa districts of Punjab state, India is essential, as this freshwater resource is being over-exploited, causing quality deterioration and groundwater depletion and posing serious threats to residents. The present integrated study appraised the quality and suitability of groundwater for drinking and irrigation purposes, its hydro-geochemical characteristics, source identification and associated health risks. Groundwater in the study districts was found to be heavily contaminated with As, followed by U, posing high cancer risks to local residents via ingestion, along with minor contamination by Fe, Mn, Pb and F−. Most health concerns in the study region were due to elevated concentrations of arsenic in groundwater, with average values of 130 µg L-1, 176 µg L-1, 272 µg L-1 and 651 µg L-1 in the Tarn-Taran, Bathinda, Faridkot and Mansa districts, respectively, far above the safe limit of 10 µg L-1 recommended by the BIS. In the Tarn-Taran, Bathinda, Faridkot and Mansa districts, average uranium contents were 37 µg L-1, 88 µg L-1, 61 µg L-1 and 104 µg L-1, with 51%, 74%, 61% and 71% of samples, respectively, above the WHO limit of 30 µg L-1 for groundwater. Further, the quality indices showed that groundwater of the study region is suited for irrigation but not appropriate for drinking purposes. Hydro-geochemical studies revealed that most of the collected groundwater samples belonged to the Ca2+ - Mg2+ - HCO3- type, with dominance of the MgCO3 type, indicating temporary hardness in the groundwater. Rock-water interactions and reverse ion exchange were the predominant factors controlling hydro-geochemistry in the study region. Dissolution of silicate minerals caused the dominance of Na+ ions in the aquifers of the study region.
Multivariate statistics revealed that, along with geogenic sources, the contribution of anthropogenic activities such as the injudicious application of agrochemicals and domestic waste discharge was also very significant. The results obtained dispel the myth that uranium is the only root cause of the large number of cancer patients in the study region, as arsenic and mercury were also present in groundwater at levels of health concern.
Keywords: uranium, trace elements, multivariate data analysis, risk assessment
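The exceedance percentages reported above (e.g., 51-74% of samples over the WHO uranium limit) follow from a simple fraction-over-limit check against the relevant guideline value. A hedged sketch using the BIS arsenic limit cited in the abstract; the sample concentrations are invented for illustration:

```python
# Fraction of groundwater samples exceeding a drinking-water guideline.
# The BIS arsenic limit of 10 µg/L is taken from the abstract; the sample
# concentrations below are illustrative, not the study's measurements.

BIS_ARSENIC_LIMIT_UG_L = 10.0

def fraction_exceeding(concentrations_ug_l, limit=BIS_ARSENIC_LIMIT_UG_L):
    """Fraction of samples whose concentration exceeds the given limit."""
    above = sum(1 for c in concentrations_ug_l if c > limit)
    return above / len(concentrations_ug_l)

samples = [4.2, 15.0, 130.0, 8.9, 651.0, 12.5]  # µg/L, illustrative
print(fraction_exceeding(samples))  # 4 of 6 samples exceed the limit
```

The same check with the WHO uranium limit (30 µg/L) reproduces the style of per-district exceedance percentages quoted in the abstract.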
Procedia PDF Downloads 73
947 Treatment of Premalignant Lesions: Curcumin a Promising Non-Surgical Option
Authors: Heba A. Hazzah, Ragwa M. Farid, Maha M. A. Nasra, Mennatallah Zakria, Magda A. El Massik, Ossama Y. Abdallah
Abstract:
Introduction: Curcumin (Cur) is a polyphenol derived from the herbal remedy and dietary spice turmeric. It possesses diverse anti-inflammatory and anti-cancer properties following oral or topical administration. Buccal delivery of curcumin can be useful for both systemic and local disease treatments such as gingivitis, periodontal diseases, oral carcinomas, and precancerous oral lesions. Despite its high activity, its application is limited by its low oral bioavailability, poor aqueous solubility, and instability. Aim: Preparation and characterization of curcumin solid lipid nanoparticles (SLN) with a high loading capacity, incorporated into a mucoadhesive gel for buccal application. Methodology: Curcumin was formulated as nanoparticles using different lipids, namely Gelucire 39/01, Gelucire 50/13, Precirol, and Compritol, with Poloxamer 407 as a surfactant. The SLN were dispersed in a mucoadhesive gel matrix to be applied to the buccal mucosa. All formulations were evaluated for drug content, entrapment efficiency, particle size, in vitro drug dialysis, ex vivo mucoadhesion, and ex vivo permeation using chicken buccal mucosa. Clinical evaluation was conducted on 15 cases suffering from oral erythroplakia and erosive lichen planus. Results: The results showed high entrapment efficiency, reaching almost 90% using Gelucire 50/13; the gel loaded with Cur-SLN showed good adhesion properties and a 25-minute in vivo residence time, in addition to enhanced stability of the Cur powder. No drug permeation was observed for any formula; however, a significant amount of Cur was retained within the mucosal tissue. Pain and lesion sizes were significantly reduced upon topical treatment. Complete healing was observed after 6 weeks of treatment. Conclusion: These results open the door for pharmaceutical technology to optimize the use of this golden, magical powder and get the best out of it.
In addition, the lack of local anti-inflammatory compounds with reduced side effects intensifies the importance of studying natural products for this purpose.
Keywords: curcumin, erythroplakia, mucoadhesive, pain, solid lipid nanoparticles
Procedia PDF Downloads 451
946 Social Business Model: Leveraging Business and Social Value of Social Enterprises
Authors: Miriam Borchardt, Agata M. Ritter, Macaliston G. da Silva, Mauricio N. de Carvalho, Giancarlo M. Pereira
Abstract:
This paper aims to analyze the barriers faced by social enterprises and, based on that, to propose a social business model framework that helps them leverage their businesses and the social value delivered. A business model for social enterprises should amplify value perception, including the social value for beneficiaries, while generating enough profit to scale the business. Most beneficiaries of this social value are people from the base of the economic pyramid (BOP) or people with specific needs. Because of this, products and services should be affordable to consumers while addressing the social needs of beneficiaries. Developing products and services with social value requires close relationships among social enterprises, universities, public institutions, accelerators, and investors. Despite being focused on social value and contributing to beneficiaries’ quality of life, as well as supporting governments that cannot properly guarantee public services and infrastructure for the BOP, social enterprises face many barriers to scaling their businesses. This is a work in progress, and five micro- and small-sized social enterprises in Brazil have been studied: (i) one has developed a kit for cervical uterine cancer detection that allows BOP women to collect their own material and deliver it to a laboratory for U$1.00; (ii) another has developed lactose-free products that are about 70% cheaper than traditional brands on the market; (iii) the third has developed prostheses and orthoses to meet needs that the public health system has not served efficiently; (iv) the fourth produces and commercializes menstrual panties, aiming to reduce the consumption of disposable ones while saving consumers money; (v) the fifth develops and commercializes clothes made from fabric waste in partnership with BOP artisans.
The preliminary results indicate that the main barriers are: the public system’s failure to recognize that public money could be saved by buying products from these enterprises instead of from multinational pharmaceutical companies; the traditional distribution system (e.g., pharmacies), which avoids these products because of low or non-existent profit; the difficulty of buying raw material in small quantities; the difficulty of attracting investment; and cultural barriers and taboos. Interesting cost-reduction strategies have been observed: some enterprises have focused on simplifying products, while others have invested in partnerships with local producers and have developed their own machines, focusing on process efficiency.
Keywords: base of the pyramid, business model, social business, social business model, social enterprises
Procedia PDF Downloads 102
945 Leveraging Natural Language Processing for Legal Artificial Intelligence: A Longformer Approach for Taiwanese Legal Cases
Abstract:
Legal artificial intelligence (LegalAI) has seen increasing application within legal systems, propelled by advancements in natural language processing (NLP). Compared with general documents, legal case documents are typically long text sequences with intrinsic logical structures. Most existing language models have difficulty capturing the long-distance dependencies between different structures. Another unique challenge is that, while the Judiciary of Taiwan has released legal judgments from various levels of courts over the years, the lack of labeled datasets remains a significant obstacle. This deficiency makes it difficult to train models with strong generalization capabilities, as well as to accurately evaluate model performance. To date, models in Taiwan have yet to be specifically trained on judgment data. Given these challenges, this research proposes a Longformer-based pre-trained language model explicitly devised for retrieving similar judgments in Taiwanese legal documents. This model is trained on a self-constructed dataset, which this research has independently labeled to measure judgment similarities, thereby addressing the void left by the lack of an existing labeled dataset for Taiwanese judgments. This research adopts strategies such as early stopping and gradient clipping to prevent overfitting and manage gradient explosion, respectively, thereby enhancing the model's performance. The model is evaluated using both the dataset and the Average Entropy of Offense-charged Clustering (AEOC) metric, which utilizes the notion of similar case scenarios within the same type of legal cases. Our experimental results demonstrate the model's significant advances in handling similarity comparisons within extensive legal judgments.
By enabling more efficient retrieval and analysis of legal case documents, our model holds the potential to facilitate legal research, aid legal decision-making, and contribute to the further development of LegalAI in Taiwan.
Keywords: legal artificial intelligence, computation and language, language model, Taiwanese legal cases
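The abstract names the AEOC metric but does not spell out its formula. One plausible reading, offered here purely as an assumption, is that each cluster of retrieved judgments is scored by the Shannon entropy of the offense-charge labels it contains, and these entropies are averaged; a pure cluster scores 0, a maximally mixed one scores highest. A minimal sketch under that assumption, with invented labels:

```python
# Hedged sketch of one possible reading of "Average Entropy of
# Offense-charged Clustering" (AEOC): average Shannon entropy of the
# offense-charge label distribution within each cluster. The exact formula
# is not given in the abstract; this is an assumption for illustration.
import math
from collections import Counter

def charge_entropy(labels):
    """Shannon entropy (bits) of the charge-label distribution in one cluster."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def aeoc(clusters):
    """Average charge-label entropy over a list of clusters."""
    return sum(charge_entropy(c) for c in clusters) / len(clusters)

# A pure cluster has entropy 0; a 50/50 mixed cluster has entropy 1 bit.
clusters = [["theft", "theft", "theft"], ["fraud", "theft"]]
print(aeoc(clusters))  # → 0.5
```

Under this reading, a lower AEOC would indicate that retrieved judgments cluster by offense type, consistent with the "similar case scenarios" intuition the abstract describes.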
Procedia PDF Downloads 73
944 Ultrastructural Characterization of Lipid Droplets of Rat Hepatocytes after Whole Body 60-Cobalt Gamma Radiation
Authors: Ivna Mororó, Lise P. Labéjof, Stephanie Ribeiro, Kely Almeida
Abstract:
Lipid droplets (LDs) are normally present in greater or lesser numbers in the cytoplasm of almost all eukaryotic and some prokaryotic cells. They are independent organelles composed of a lipid ester core and a surface phospholipid monolayer. As a lipid storage form, they provide an available source of energy for the cell. Recently it was demonstrated that they play an important role in many other cellular processes. Among the many unresolved questions about them, it is not even known how LDs are formed, how lipids are recruited to LDs, and how LDs interact with other organelles. Excess fat in the organism is pathological and often associated with the development of genetic, hormonal or behavioral diseases. The formation and accumulation of lipid droplets in the cytoplasm can be increased by exogenous physical or chemical agents. It is well known that ionizing radiation affects lipid metabolism, resulting in increased lipogenesis in cells, but the details of this process are unknown. To better understand the mode of formation of LDs in liver cells, we investigated their ultrastructural morphology after irradiation. For that, Wistar rats were exposed to whole-body gamma radiation from 60-cobalt at various single doses. Samples of the livers were processed for analysis under a conventional transmission electron microscope. We found that, compared to controls, morphological changes in liver cells were evident at the higher radiation doses used. A great number of lipid droplets of different sizes and homogeneous content were detected, some of them merging with each other. In some cells, diffuse LDs were observed that were not limited by a monolayer of phospholipids. This finding suggests that the phospholipid monolayer of the LDs was disrupted by ionizing radiation exposure, which promotes lipid peroxidation of endomembranes.
Thus the absence of the phospholipid monolayer may prevent the realization of some cellular activities, as follows: - lipid exocytosis, which requires the merging of the LD membrane with the plasma membrane; - the interaction of LDs with other membrane-bound organelles such as the endoplasmic reticulum (ER), the Golgi and mitochondria; and - lipolysis of the lipid esters contained in the LDs, which requires enzymes located in membrane-bound organelles such as the ER. All these impediments can contribute to lipid accumulation in the cytoplasm and the development of diseases such as liver steatosis, cirrhosis and cancer.
Keywords: radiobiology, hepatocytes, lipid metabolism, transmission electron microscopy
Procedia PDF Downloads 314
943 Increasing Cervical Screening Uptake during the Covid-19 Pandemic at Lakeside Healthcare, Corby, UK
Authors: Devyani Shete, Sudeep Rai
Abstract:
Background: The COVID-19 pandemic has caused one of the greatest disruptions to the NHS (National Health Service), especially to the fundamental cervical cancer screening service. To prioritize the screening response effectively, it is vital to understand the underlying disease risks amongst groups of women who are less likely to resume their screening/follow-up at General Practices. The current government target is for >= 80% of women to have had an adequate test within the previous 3.5 years (ages 25-49) or 5.5 years (ages 50-64). Aims/Objectives: To increase the number of eligible people aged 25-49 attending cervical screening by 5% at Lakeside Healthcare (a General Practice in Corby). Methods: An online survey was posted on the Lakeside Healthcare website to find out what the barriers to cervical screening were. It was apparent that patients needed more information catered to their responses. 6 informational videos and a “Cervical Screening Guide” were created for Lakeside patients about cervical screening, which were posted on the Healthcare website. Lakeside also started sending reminder texts to those eligible, with a link to a booking form. Results: On 18th January 2022, 69.7% of patients aged 25-49 years (7207) had had an adequate cervical screening test in the last 3.5 years. There were 80 responders to the online survey. In response to “which of the following are reasons why you have not attended screening”, 30% ticked “I kept putting it off/did not get around to it,” and 13% ticked “I was worried it would be painful or daunting.” In response to “which of the following would make you more likely to book an appointment”, 23% ticked “More detailed explanations of what the risks are if I don’t have screening,” and 20% ticked “I would like more information about the test and what the smear entails.” 10% of responders had previous trauma, whilst 28% of responders said the pandemic had impacted them getting a smear.
Survey results were used to carry out interventions to increase smear uptake. On 23rd March 2022 (after a 2-month period), 75% of patients aged 25-49 (7119) had attended screening, a 5.3 percentage-point increase from January. Discussion/Conclusion: The survey was vital in identifying the exact interventions required to increase screening uptake, as it is important to know what the population's needs are in order to create personalized invitations. This helps to optimise response during a pandemic. An HPV self-sample kit at home could be a popular method of dealing with further outbreaks.
Keywords: gynaecology, cervical screening, public health, COVID-19
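The reported gain (69.7% in January to 75% in March) is a difference in percentage points rather than a relative percent increase, which is worth making explicit when comparing against the 5% target. A one-line check, using only the figures quoted in the abstract:

```python
# Percentage-point change in screening uptake, using the abstract's figures.
# 75.0% − 69.7% = 5.3 points (a relative increase would instead be
# (75.0 − 69.7) / 69.7 ≈ 7.6%).

def point_change(before_pct, after_pct):
    """Difference in percentage points between two coverage figures."""
    return after_pct - before_pct

print(round(point_change(69.7, 75.0), 1))  # → 5.3
```

Read as percentage points, the 5.3-point rise meets the project's stated aim of a 5% increase in uptake.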
Procedia PDF Downloads 150
942 Democratization, Market Liberalization and the Rise of Vested Interests and Its Impacts on Anti-Corruption Reform in Indonesia
Authors: Ahmad Khoirul Umam
Abstract:
This paper investigates the role of vested interests and their impact on the anti-corruption agenda in Indonesia following the collapse of the authoritarian regime in 1998. Pervasive and rampant corruption has been seen as the main cause of the state economy’s fragility. Hence, anti-corruption measures were implemented through democratization and market liberalization, since a consolidated democracy going hand in hand with a liberal market economy is believed to be an efficacious prescription for effective anti-corruption. The reform movement also mandated the establishment of an independent, neutral and professional special anti-corruption agency, namely the Corruption Eradication Commission (KPK), to intensify the fight against systemic corruption. This paper examines whether these anti-corruption measures have been effective in combating corruption, and investigates to what extent anti-corruption efforts, especially those conducted by the KPK, have been impeded by the emergence of a nexus of vested interests as a side effect of democratization and market liberalization. Based on interviews with key stakeholders from the KPK, other law enforcement agencies, government, prominent scholars, journalists and NGOs in Indonesia, it is found that since the overthrow of Soeharto, the anti-corruption movement in the country has become more active and serious. After gradually winning the hearts of the people, the KPK successfully touched previously untouchable corruption perpetrators who had been protected by political immunity, legal protection and bureaucratic barriers. However, these changes have not necessarily reduced systemic and structural corruption practices. Ironically, intensive and devastating counterattacks were frequently mounted by an alignment of business actors, elites of political parties, government, and also law enforcement agencies, hijacking the state’s instruments to render the KPK deflated, powerless, and forced to surrender.
This paper concludes that democratization, market liberalization and the establishment of an anti-corruption agency may have helped Indonesia reduce corruption. However, it is still difficult to conclude that such anti-corruption measures have fostered more effective anti-corruption work in the newly democratized and weakly regulated liberal economic system.
Keywords: vested interests, democratization, market liberalization, anti-corruption, Indonesia
Procedia PDF Downloads 232
941 Animations for Teaching Food Chemistry: A Design Approach for Linking Chemistry Theory to Everyday Food
Authors: Paulomi (Polly) Burey, Zoe Lynch
Abstract:
In STEM education, students often have difficulty linking static images and words from textbooks or online resources, to the underlying mechanisms of the topic of study. This can often dissuade some students from pursuing study in the physical and chemical sciences. A growing movement in current day students demonstrates that the YouTube generation feel they learn best from video or dynamic, interactive learning tools, and will seek these out as alternatives to their textbooks and the classroom learning environment. Chemistry, and in particular visualization of molecular structures in everyday materials, can prove difficult to comprehend without significant interaction with the teacher of the content and concepts, beyond the timeframe of a typical class. This can cause a learning hurdle for distance education students, and so it is necessary to provide strong electronic tools and resources to aid their learning. As one of the electronic resources, an animation design approach to link everyday materials to their underlying chemistry would be beneficial for student learning, with the focus here being on food. These animations were designed and storyboarded with a scaling approach and commence with a focus on the food material itself and its component parts. This is followed by animated transitions to its underlying microstructure and identifying features, and finally showing the molecules responsible for these microstructural features. The animation ends with a reverse transition back through the molecular structure, microstructure, all the way back to the original food material, and also animates some reactions that may occur during food processing to demonstrate the purpose of the underlying chemistry and how it affects the food we eat. 
This cyclical approach of linking students’ existing knowledge of food to guide them toward more complex knowledge, and then reinforcing their learning by linking back to that prior knowledge, enhances student understanding. Food is also an ideal material system for students to interact with in a hands-on manner, further reinforcing their learning. These animations were launched this year in a 2nd-year university Food Chemistry course with improved learning outcomes for the cohort.
Keywords: chemistry, food science, future pedagogy, STEM education
Procedia PDF Downloads 160
940 Inducing Cryptobiosis State of Tardigrades in Cyanobacteria Synechococcus elongatus for Effective Preservation
Authors: Nilesh Bandekar, Sumita Dasgupta, Luis Alberto Allcahuaman Huaya, Souvik Manna
Abstract:
Cryptobiosis is a dormant state in which all measurable metabolic activities come to a halt, allowing an organism to survive extreme conditions such as low temperature (cryobiosis) or extreme drought (anhydrobiosis). This phenomenon is observed especially in tardigrades, which can retain this state for decades depending on abiotic environmental conditions. On returning to favorable conditions, tardigrades re-attain a metabolically active state. In this study, cyanobacteria are chosen as a model organism in which to induce cryptobiosis for effective preservation over a long period of time. Preserving cyanobacteria using this strategy will have multiple space applications because of their ability to produce oxygen. In addition, research has shown the survivability of this organism in space for a certain period of time. A few soil-dwelling cyanobacteria, such as Microcoleus, are also able to survive extreme drought. This work specifically focuses on Synechococcus elongatus, an endolithic cyanobacterium with multiple benefits. It has the capability to produce 25% of the oxygen in water bodies. It utilizes carbon dioxide to produce oxygen via photosynthesis and also uses carbon dioxide as an energy source to form glucose via the Calvin cycle. There is a fair possibility of initiating cryptobiosis in such an organism by introducing certain proteins extracted from tardigrades, such as Heat Shock Proteins (Hsp27 and Hsp30c) and/or hydrophilic Late Embryogenesis Abundant (LEA) proteins. Existing methods like cryopreservation are difficult to execute in space given their cost and heavy instrumentation, and extensive freezing may cause cellular damage. Therefore, cryptobiosis-induced cyanobacteria transported from Earth to Mars as part of future terraforming missions will save resources and increase the effectiveness of preservation.
Finally, cyanobacteria species like Synechococcus elongatus can also produce oxygen and glucose on Mars under favorable conditions and hold the key to terraforming Mars.
Keywords: cryptobiosis, cyanobacteria, glucose, Mars, Synechococcus elongatus, tardigrades
Procedia PDF Downloads 229
939 Importance of Detecting Malingering Patients in Clinical Setting
Authors: Sakshi Chopra, Harsimarpreet Kaur, Ashima Nehra
Abstract:
Objectives: Malingering is the fabrication or exaggeration of symptoms of mental or physical disorders for a variety of secondary gains or motives, which may include financial compensation, avoiding work, receiving lighter criminal sentences, or simply attracting attention or sympathy. Malingering is distinct from somatization disorder and factitious disorder. Its prevalence is unknown and difficult to determine; estimates in forensic populations reach up to 17% of cases, but the accuracy of such estimates is questionable because successful malingerers are not detected and thus not included. Methods: A case study of a 58-year-old, right-handed graduate, pre-morbidly working in a national company, with a reported history of stroke leading to head injury, cerebral infarction/facial palsy, and dementia. He was referred for disability certification so that his job position could be transferred to his son, as he could no longer work. A series of neuropsychological tests was administered. Results: Testing indicated a mental age of <2.5 years; overall social adaptive functioning <20, indicating profound mental retardation; a social age of less than 1 year in the abilities of self-help, eating, dressing, locomotion, occupation, communication, self-direction, and socialization; severely impaired verbal and performance ability; 96% impairment in Activities of Daily Living; and an indication of very severe depression. Given the inconsistent and fluctuating medical findings and the differing problem descriptions given to the health professionals forming his disability board, it was concluded that this patient was malingering. Conclusions: Even though it can be easily defined, malingering can be very challenging to diagnose. Cases of malingering impose a substantial economic burden on the health care system, and false attribution of malingering imposes a substantial burden of suffering on a significant proportion of the patient population.
Timely, tactful diagnosis and management can help ease this burden on the healthcare system. In the clinical setting, malingering can be detected only by trained mental health professionals.
Keywords: disability, India, malingering, neuropsychological assessment
Procedia PDF Downloads 420
938 Evaluation of Different Anticoagulant Effects on Flow Properties of Human Blood Using Falling Needle Rheometer
Authors: Hiroki Tsuneda, Takamasa Suzuki, Hideki Yamamoto, Kimito Kawamura, Eiji Tamura, Katharina Wochner, Roberto Plasenzotti
Abstract:
The flow properties of human blood are important factors in the prevention of circulatory conditions such as high blood pressure, diabetes mellitus, and cardiac infarction. However, measuring the flow properties of human blood, especially its viscosity, is not easy because of coagulation and aggregation after a sample is taken from the blood vessel. In practice, anticoagulants are added to human blood to prevent solidification. The anticoagulant used in a blood test is chosen according to the purpose of the test, since each anticoagulant acts on blood through a different mechanism; as a result, comparing blood properties measured with different anticoagulants is difficult. It is therefore important to clarify how the choice of anticoagulant affects measured blood properties. In previous work, a compact falling needle rheometer (FNR) was developed to measure the flow properties of human blood, such as the flow curve and the apparent viscosity, and it was found that the FNR can serve as a rheometer or viscometer under various experimental conditions, for mammalian blood as well as human blood. In this study, measurements of human blood viscosity with different anticoagulants (EDTA and heparin) were carried out using the newly developed FNR system, and the effect of each anticoagulant on blood viscosity was tested. The accuracy of the viscometry was verified using standard calibration liquids (JS-10, JS-20); the observed data agreed with reference data to within about 1.0% at 310 K. The flow curves of six male and female subjects were measured with each anticoagulant. Heparin inhibits the coagulation of human blood by activating antithrombin.
To examine the effect of the anticoagulant on human blood viscosity, flow curves were measured at high shear rates (>350 s⁻¹), and the apparent viscosity of each subject's blood was determined with each anticoagulant. The apparent viscosity of human blood with heparin was 2%-9% higher than that with EDTA; however, the difference between the two anticoagulants varied from subject to subject. Further discussion will require consideration of the effects of other physical properties, such as the cellular and plasma components.
Keywords: falling-needle rheometer, human blood, viscosity, anticoagulant
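The calculation behind a falling needle measurement can be sketched as follows: for a needle falling at terminal velocity in a fluid, the apparent viscosity is proportional to the density difference between needle and fluid divided by the fall velocity, with the proportionality constant obtained by calibration against a standard liquid such as the JS-10 mentioned above. This is a minimal illustrative sketch of that principle, not the authors' implementation; the function names and all numerical values are assumptions.

```python
def instrument_constant(eta_std, rho_needle, rho_std, v_std):
    """Calibrate the instrument constant K from a standard liquid of
    known viscosity eta_std (Pa.s), density rho_std (kg/m^3), and
    measured terminal velocity v_std (m/s). Gravity and the needle
    geometry factor are folded into K."""
    return eta_std * v_std / (rho_needle - rho_std)

def apparent_viscosity(K, rho_needle, rho_fluid, v):
    """Apparent viscosity (Pa.s) of a test fluid from the measured
    terminal velocity v (m/s) of the same needle."""
    return K * (rho_needle - rho_fluid) / v

# Hypothetical calibration: a 10 mPa.s standard, steel needle, 5 cm/s fall.
K = instrument_constant(0.010, 7800.0, 1000.0, 0.05)
# A slower fall in a fluid of similar density indicates higher viscosity.
eta_blood_like = apparent_viscosity(K, 7800.0, 1050.0, 0.012)
```

A round-trip check (feeding the calibration measurement back into `apparent_viscosity`) should recover the standard's viscosity exactly, which is a quick sanity test for this kind of calibration scheme.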
Procedia PDF Downloads 442
937 Career Guidance System Using Machine Learning
Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan
Abstract:
Artificial Intelligence in Education (AIED) was created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. Career guidance nevertheless remains challenging, especially given the labor market's rapid change. When choosing a career, people face various obstacles because they do not take their own preferences into consideration, which can lead to problems such as job shifting, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. This gap can be closed by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need for an automated model that supports decision-making based on user inputs. Career guidance can be improved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies such as KNN, neural networks, K-means clustering, decision trees, and other advanced algorithms are applied to user data to predict suitable careers. Besides helping users with their career choice, these systems provide numerous opportunities that are useful when making this hard decision.
They help candidates recognize where they specifically lack sufficient skills so that they can improve them. They can also offer an e-learning platform that takes the user's knowledge gaps into account. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills
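The KNN approach the abstract names can be illustrated with a minimal sketch: each user is a vector of skill/personality scores, and a career label is predicted by majority vote among the k nearest training profiles. The feature names, profiles, and labels below are hypothetical examples for illustration, not data from this work.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, career_label) pairs.
    Returns the majority label among the k nearest neighbours
    by Euclidean distance."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical profiles: (coding, math, communication) scores in [0, 10].
profiles = [
    ((9, 8, 3), "software engineer"),
    ((8, 9, 4), "software engineer"),
    ((2, 3, 9), "counselor"),
    ((3, 2, 8), "counselor"),
    ((7, 9, 5), "data scientist"),
]
```

A query such as `knn_predict(profiles, (8, 8, 4))` then returns the career most common among the three closest profiles; a production system would of course use many more features and a validated training set.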
Procedia PDF Downloads 81
936 Investigation on the Effect of Titanium (Ti) Plus Boron (B) Addition to the Mg-AZ31 Alloy in the as Cast and After Extrusion on Its Metallurgical and Mechanical Characteristics
Authors: Adnan I. O. Zaid, Raghad S. Hemeimat
Abstract:
Magnesium-aluminum alloys are versatile materials used in manufacturing a number of engineering and industrial parts in the automobile and aircraft industries because of their strength-to-weight ratio. Despite these favorable characteristics, magnesium is difficult to deform at room temperature, and it is therefore alloyed with other elements, mainly aluminum and zinc, to obtain required properties, particularly a high strength-to-weight ratio. Mg and its alloys oxidize rapidly, so care should be taken when melting or machining them, although they are not fire hazardous. Grain refinement is an important technology for improving the mechanical properties and microstructural uniformity of alloys. It was introduced in the early 1950s, when Cibula showed that the presence of Ti, and of Ti+B, produced a strong refining effect in Al; since then it has become industrial practice to grain-refine Al. Most published work on grain refinement has been directed toward Al and zinc alloys, however, and the effect of such additions on the grain size or mechanical behavior of Mg alloys has not been previously investigated. This forms the main objective of the present work, in which the effect of Ti addition on the grain size, mechanical behavior, ductility, and the extrusion force and energy consumed in forward extrusion of the Mg-AZ31 alloy is investigated and discussed in two conditions: first as cast, and second after extrusion. It was found that the addition of Ti to the Mg-AZ31 alloy reduced its grain size by 14%, with a much larger reduction after extrusion. The increase in Vickers hardness was 3% with the addition of Ti in the as-cast condition, and higher hardness values were achieved after extrusion.
Furthermore, a 36% increase in the strength coefficient was achieved with the addition of Ti to the Mg-AZ31 alloy in the as-cast condition. Similarly, the work-hardening index also increased, indicating enhanced ductility and formability. As for the extrusion process, the force and energy required were both reduced, by 57% and 59% respectively, with the addition of Ti.
Keywords: cast condition, direct extrusion, ductility, MgAZ31 alloy, superplasticity
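The strength coefficient and work-hardening index reported above are the two parameters of the standard Hollomon description of plastic flow, sigma = K * eps^n. The sketch below shows one common way such parameters are extracted from tensile data, via a log-log least-squares fit; the abstract does not state the authors' fitting procedure, and the data values here are synthetic for illustration only.

```python
import math

def hollomon_fit(strains, stresses):
    """Fit sigma = K * eps**n by least squares on
    log(sigma) = log(K) + n * log(eps).
    Returns (K, n): strength coefficient and work-hardening index."""
    xs = [math.log(e) for e in strains]
    ys = [math.log(s) for s in stresses]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    K = math.exp(my - n * mx)
    return K, n

# Synthetic stress-strain points generated from K = 250 MPa, n = 0.2;
# the fit should recover those parameters.
strains = [0.01, 0.02, 0.05, 0.10, 0.20]
stresses = [250.0 * e ** 0.2 for e in strains]
K_fit, n_fit = hollomon_fit(strains, stresses)
```

On real tensile data the fit is restricted to the uniform plastic region of the true stress-true strain curve; a higher n, as reported here after Ti addition, corresponds to greater work hardening and hence improved formability.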
Procedia PDF Downloads 454