Search results for: strain fields
706 Structural and Binding Studies of Peptidyl-tRNA Hydrolase from Pseudomonas aeruginosa Provide a Platform for the Structure Based Inhibitor Design against Peptidyl-tRNA Hydrolase
Authors: Sujata Sharma, Avinash Singh, Lovely Gautam, Pradeep Sharma, Mau Sinha, Asha Bhushan, Punit Kaur, Tej P. Singh
Abstract:
Peptidyl-tRNA hydrolase (Pth) is an essential bacterial enzyme that catalyzes the release of free tRNA and peptide moieties from peptidyl-tRNAs during stalling of protein synthesis. In order to design inhibitors of Pth from Pseudomonas aeruginosa (PaPth), we have determined the structures of PaPth in its native state and in the bound states with two compounds, an amino acylate-tRNA analogue (AAtA) and 5-azacytidine (AZAC). The peptidyl-tRNA hydrolase gene from Pseudomonas aeruginosa was amplified by Phusion High-Fidelity DNA Polymerase using forward and reverse primers, respectively. The E. coli BL21 (λDE3) strain was used for expression of the recombinant peptidyl-tRNA hydrolase from Pseudomonas aeruginosa. The protein was purified using a Ni-NTA superflow column. The crystallization experiments were carried out using the hanging drop vapour diffusion method. The crystals diffracted to 1.50 Å resolution. The data were processed using HKL-2000. The polypeptide chain of PaPth consists of 194 amino acid residues from Met1 to Ala194. The centrally located β-structure is surrounded by α-helices on all sides except the side that forms the entrance to the substrate binding site. The structures of the complexes of PaPth with AAtA and AZAC showed the ligands bound to PaPth in the substrate binding cleft, interacting extensively with protein atoms. The residues that formed intermolecular hydrogen bonds with the atoms of AAtA included Asn12, His22, Asn70, Gly113, Asn116, Ser148, and Glu161 of the symmetry related molecule. The amino acids involved in hydrogen bonded interactions in the case of AZAC included His22, Gly113, Asn116, and Ser148. As indicated by the fitting of the two ligands and the number of interactions made by them with protein atoms, AAtA appears to be more compatible with the structure of the substrate binding cleft. However, there is further scope to achieve better stacking of the O-tyrosyl moiety, which is not yet ideally stacked. These observations about the interactions between the protein and the ligands have provided information about the mode of binding of the ligands and the nature and number of interactions. This information may be useful for the design of tight-binding inhibitors of Pth enzymes.
Keywords: peptidyl tRNA hydrolase, Acinetobacter baumannii, Pth enzymes, O-tyrosyl
Procedia PDF Downloads 431
705 Thermal Stability and Electrical Conductivity of Ca₅Mg₄₋ₓMₓ(VO₄)₆ (0 ≤ x ≤ 4) where M = Zn, Ni Measured by Impedance Spectroscopy
Authors: Anna S. Tolkacheva, Sergey N. Shkerin, Kirill G. Zemlyanoi, Olga G. Reznitskikh, Pavel D. Khavlyuk
Abstract:
Calcium oxovanadates with a garnet-related structure are multifunctional oxides used in various fields such as photoluminescence, microwave dielectrics, and magneto-dielectrics. For example, vanadate garnets are self-luminescent compounds. They attract attention as RE-free broadband excitation and emission phosphors and are candidate materials for UV-based white light-emitting diodes (WLEDs). Ca₅M₄(VO₄)₆ (M = Mg, Zn, Co, Ni, Mn) compounds are also considered promising for application in microwave devices as substrate materials. However, the relation between their structure, composition and physical/chemical properties remains unclear. Given the above, the goals of this study are to synthesise Ca₅M₄(VO₄)₆ (M = Mg, Zn, Ni) and to study their thermal and electrical properties. Solid solutions Ca₅Mg₄₋ₓMₓ(VO₄)₆ (0 ≤ x ≤ 4), where M is Zn and Ni, have been synthesized by the sol-gel method. The single-phase character of the final products was checked by powder X-ray diffraction on a Rigaku D/MAX-2200 X-ray diffractometer using Cu Kα radiation in the 2θ range from 15° to 70°. The dependence of thermal properties on the chemical composition of the solid solutions was studied using simultaneous thermal analyses (DSC and TG). Thermal analyses were conducted in a Netzsch simultaneous analyser STA 449C Jupiter, in Ar atmosphere, in the temperature range from 25 to 1100°C; the heating rate was 10 K·min⁻¹. Coefficients of thermal expansion (CTE) were obtained by dilatometry measurements in air up to 800°C using a Netzsch 402PC dilatometer; the heating rate was 1 K·min⁻¹. Impedance spectra were obtained via the two-probe technique with a Parstat 2273 impedance meter in air up to 700°C, with variation of pH₂O from 0.04 to 3.35 kPa. Cation deficiency in the Ca and Mg sublattice under the substitution of MgO with ZnO up to 1/6 was observed using Rietveld refinement of the crystal structure. The melting point was found to decrease as x changes from 0 to 4 in Ca₅Mg₄₋ₓMₓ(VO₄)₆ where M is Zn and Ni. It was observed that the electrical conductivity does not depend on air humidity. The reported study was funded by the RFBR Grant No. 17–03–01280. Sample characterization was carried out in the Shared Access Centers at the IHTE UB RAS.
Keywords: garnet structure, electrical conductivity, thermal expansion, thermal properties
Procedia PDF Downloads 155
704 Calibration of Mini TEPC and Measurement of Lineal Energy in a Mixed Radiation Field Produced by Neutrons
Authors: I. C. Cho, W. H. Wen, H. Y. Tsai, T. C. Chao, C. J. Tung
Abstract:
The tissue-equivalent proportional counter (TEPC) is a useful instrument for measuring radiation single-event energy depositions in a subcellular target volume. The measured quantity is the microdosimetric lineal energy, which determines the relative biological effectiveness, RBE, for radiation therapy or the radiation weighting factor, WR, for radiation protection. A TEPC is generally used in a mixed radiation field, where each component radiation has its own RBE or WR value. To reduce the pile-up effect during radiotherapy measurements, a miniature TEPC (mini TEPC) with a cavity size on the order of 1 mm may be required. In the present work, a homemade mini TEPC with a cylindrical cavity of 1 mm in both diameter and height was constructed to measure the lineal energy spectrum of a mixed radiation field with high- and low-LET radiations. Instead of using external radiation beams to penetrate the detector wall, mixed radiation fields were produced by the interactions of neutrons with TEPC walls that contained small plugs of different materials, i.e. Li, B, A150, Cd and N. In all measurements, the mini TEPC was placed at the beam port of the Tsing Hua Open-pool Reactor (THOR). Measurements were performed using the propane-based tissue-equivalent gas mixture, i.e. 55% C3H8, 39.6% CO2 and 5.4% N2 by partial pressures. A gas pressure of 422 torr was applied for the simulation of a 1 μm diameter biological site. The calibration of the mini TEPC was performed using two marker points in the lineal energy spectrum, i.e. the proton edge and the electron edge. Measured spectra revealed high lineal energy (> 100 keV/μm) peaks due to neutron-capture products, medium lineal energy (10–100 keV/μm) peaks from hydrogen-recoil protons, and low lineal energy (< 10 keV/μm) peaks of reactor photons. For the cases of Li and B plugs, the high lineal energy peaks were quite prominent. The medium lineal energy peaks were in the decreasing order of Li, Cd, N, A150, and B. The low lineal energy peaks were small compared to the other peaks. This study demonstrated that internally produced mixed radiations from the interactions of neutrons with different plugs in the TEPC wall provide a useful approach for TEPC measurements of lineal energies.
Keywords: TEPC, lineal energy, microdosimetry, radiation quality
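As a worked illustration of the site-simulation principle behind the gas pressure quoted above, the sketch below uses the standard microdosimetry scaling ρ_gas·d_cavity = ρ_tissue·d_site together with the ideal gas law. The effective molar mass, temperature and round-number densities are assumptions, so the result indicates the order of magnitude rather than reproducing the counter's 422 torr operating point.

```python
# Minimal sketch of the TEPC site-simulation principle (illustrative values only):
# a gas cavity of diameter d_cav simulates a tissue site of diameter d_site when
# rho_gas * d_cav = rho_tissue * d_site.  The molar mass is an assumed effective
# value for a propane-based tissue-equivalent mixture, not the authors' exact data.

R = 8.314            # J mol^-1 K^-1
T = 293.15           # K, room temperature (assumed)
rho_tissue = 1000.0  # kg m^-3, unit-density tissue
d_site = 1e-6        # m, simulated site diameter (1 micrometre)
d_cav = 1e-3         # m, cavity diameter (1 mm)
M_eff = 0.040        # kg mol^-1, assumed effective molar mass of the TE gas mixture

rho_gas = rho_tissue * d_site / d_cav   # required gas density
p_pa = rho_gas * R * T / M_eff          # ideal-gas pressure, Pa
p_torr = p_pa / 133.322

print(f"required gas density: {rho_gas:.3f} kg/m^3")
print(f"required pressure:    {p_torr:.0f} torr")
```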
Procedia PDF Downloads 470
703 Exploring Multimodal Communication: Intersections of Language, Gesture, and Technology
Authors: Rasha Ali Dheyab
Abstract:
In today's increasingly interconnected and technologically-driven world, communication has evolved beyond traditional verbal exchanges. This paper delves into the fascinating realm of multimodal communication, a dynamic field at the intersection of linguistics, gesture studies, and technology. The study of how humans convey meaning through a combination of spoken language, gestures, facial expressions, and digital platforms has gained prominence as our modes of interaction continue to diversify. This exploration begins by examining the foundational theories in linguistics and gesture studies, tracing their historical development and mutual influences. It further investigates the role of nonverbal cues, such as gestures and facial expressions, in augmenting and sometimes even altering the meanings conveyed by spoken language. Additionally, the paper delves into the modern technological landscape, where emojis, GIFs, and other digital symbols have emerged as new linguistic tools, reshaping the ways in which we communicate and express emotions. The interaction between traditional and digital modes of communication is a central focus of this study. The paper investigates how technology has not only introduced new modes of expression but has also influenced the adaptation of existing linguistic and gestural patterns in online discourse. The emergence of virtual reality and augmented reality environments introduces yet another layer of complexity to multimodal communication, offering new avenues for studying how humans navigate and negotiate meaning in immersive digital spaces. Through a combination of literature review, case studies, and theoretical analysis, this paper seeks to shed light on the intricate interplay between language, gesture, and technology in the realm of multimodal communication. By understanding how these diverse modes of expression intersect and interact, we gain valuable insights into the ever-evolving nature of human communication and its implications for fields ranging from linguistics and psychology to human-computer interaction and digital anthropology.
Keywords: multimodal communication, linguistics, gesture studies, emojis, verbal communication, digital
Procedia PDF Downloads 81
702 Differences in Production of Knowledge between Internationally Mobile versus Nationally Mobile and Non-Mobile Scientists
Authors: Valeria Aman
Abstract:
The presented study examines the impact of international mobility on knowledge production among mobile scientists and within the sending and receiving research groups. Scientists are relevant to the dynamics of knowledge production because scientific knowledge is mainly characterized by embeddedness and tacitness. International mobility enables the dissemination of scientific knowledge to other places and encourages new combinations of knowledge. It can also increase the interdisciplinarity of research by forming synergetic combinations of knowledge. Particularly innovative ideas can have their roots in related research domains and are sometimes transferred only through the physical mobility of scientists. Diversity among scientists with respect to their knowledge base can act as an engine for the creation of knowledge. It is therefore relevant to study how knowledge acquired through international mobility affects the knowledge production process. In certain research domains, international mobility may be essential to contextualize knowledge and to gain access to knowledge located at distant places. The knowledge production process, contingent on the type of international mobility and the epistemic culture of a research field, is examined. The production of scientific knowledge is a multi-faceted process, the output of which is mainly published in scholarly journals. Therefore, the study builds upon publication and citation data covered in Elsevier’s Scopus database for the period of 1996 to 2015. To analyse these data, bibliometric and social network analysis techniques are used. A basic analysis of scientific output using publication data, citation data and data on co-authored publications is combined with a content map analysis. Abstracts of publications indicate whether a research stay abroad makes an original contribution methodologically, theoretically or empirically. Moreover, co-citations are analysed to map linkages among scientists and emerging research domains. Finally, acknowledgements are studied that can function as channels of formal and informal communication between the actors involved in the process of knowledge production. The results provide a better understanding of how the international mobility of scientists contributes to the production of knowledge, by contrasting the knowledge production dynamics of internationally mobile scientists with those being nationally mobile or immobile. The findings also indicate whether international mobility accelerates the production of knowledge and the emergence of new research fields.
Keywords: bibliometrics, diversity, interdisciplinarity, international mobility, knowledge production
Procedia PDF Downloads 293
701 A Static and Dynamic Slope Stability Analysis of Sonapur
Authors: Rupam Saikia, Ashim Kanti Dey
Abstract:
Sonapur is an intensely hilly region on the border of Assam and Meghalaya in North-East India and lies very near the Dauki seismic fault, which makes the region seismically active. Moreover, two earthquakes of magnitude 6.7 and 6.9 recently struck North-East India, in January and April 2016. The slope considered in this study is adjacent to NH 44, which has long been the sole important connecting link to the states of Manipur and Mizoram along with some parts of Assam; over the past decades there have been several recorded incidents of landslides, road blocks, etc., mostly during the rainy season, causing considerable loss of life and property. Based on this issue, this paper reports a static and dynamic slope stability analysis of Sonapur carried out in MIDAS GTS NX. Since the slope is highly inaccessible due to the terrain and thick vegetation, in-situ testing was not feasible within the current scope, so disturbed soil samples were collected from the site for the determination of strength parameters. The strength parameters were determined for varying relative density, with further variation in water content. The slopes were analyzed considering plane strain conditions for three slope heights of 5 m, 10 m and 20 m, which were further categorized by slope angles of 30°, 40°, 50°, 60°, and 70°, considering the possible extent of steepness. Initially, static analysis was performed in the dry state; then, considering the worst case that can develop during the rainy season, the slopes were analyzed for the fully saturated condition as well as partial degrees of saturation with a rising water front. Furthermore, dynamic analysis was performed, considering the El Centro earthquake (magnitude 6.7, peak ground acceleration of 0.3569 g at 2.14 s), for the slopes found to be safe during static analysis under both dry and fully saturated conditions. Some of the conclusions were: slopes with inclinations of 40° and above were found to be highly vulnerable for slope heights of 10 m and above, even under dry static conditions, and the maximum horizontal displacement showed an exponential increase as the inclination increased from 30° to 70°. The vulnerability of the slopes was seen to increase further during the rainy season, as even slopes with a minimal steepness of 30° and a height of 20 m were seen to be on the verge of failure. Also, during dynamic analysis, slopes that were safe during static analysis were found to be highly vulnerable. Lastly, as part of the study, a comparison of the Strength Reduction Method (SRM) and the Limit Equilibrium Method (LEM) was also carried out, and some of their advantages and disadvantages were identified.
Keywords: dynamic analysis, factor of safety, slope stability, strength reduction method
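To make the stability reasoning above concrete, the following is a minimal pseudo-static infinite-slope check. It is not the MIDAS GTS NX finite-element strength-reduction analysis used in the study; it only illustrates how the factor of safety of a planar slope falls with increasing inclination and with a horizontal seismic coefficient, and the soil parameters are assumed values.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, kh=0.0):
    """Factor of safety of a dry infinite slope with a slip plane parallel to the
    surface; kh is a horizontal pseudo-static seismic coefficient (0 = static)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    # stresses on the slip plane per unit area (no pore pressure)
    sigma_n = gamma * z * math.cos(beta) * (math.cos(beta) - kh * math.sin(beta))
    tau = gamma * z * math.cos(beta) * (math.sin(beta) + kh * math.cos(beta))
    return (c + sigma_n * math.tan(phi)) / tau

# assumed parameters: c = 5 kPa, phi = 30 deg, unit weight 18 kN/m^3, depth 5 m
for beta in (30, 40, 50, 60, 70):
    fs_static = infinite_slope_fs(5.0, 30.0, 18.0, 5.0, beta)
    fs_seismic = infinite_slope_fs(5.0, 30.0, 18.0, 5.0, beta, kh=0.15)
    print(f"beta = {beta:2d} deg: FS static = {fs_static:.2f}, FS with kh = 0.15: {fs_seismic:.2f}")
```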
Procedia PDF Downloads 260
700 A System Architecture for Hand Gesture Control of Robotic Technology: A Case Study Using a Myo™ Arm Band, DJI Spark™ Drone, and a Staubli™ Robotic Manipulator
Authors: Sebastian van Delden, Matthew Anuszkiewicz, Jayse White, Scott Stolarski
Abstract:
Industrial robotic manipulators have been commonplace in the manufacturing world since the early 1960s, and unmanned aerial vehicles (drones) have only begun to realize their full potential in the service industry and the military. The omnipresence of these technologies in their respective fields will only become more pronounced in the coming years. While these technologies have greatly evolved over the years, the typical approach to human interaction with these robots has not. In the industrial robotics realm, a manipulator is typically jogged around using a teach pendant and programmed using a networked computer or the teach pendant itself via a proprietary software development platform. Drones are typically controlled using a two-handed controller equipped with throttles, buttons, and sticks, an app that can be downloaded to one’s mobile device, or a combination of both. This application-oriented work offers a novel approach to human interaction with both unmanned aerial vehicles and industrial robotic manipulators via hand gestures and movements. Two systems have been implemented, both of which use a Myo™ armband: one controls a drone (DJI Spark™) and the other a robotic arm (Stäubli™ TX40). The methodologies developed in this work present a mapping of armband gestures (fist, finger spread, swing hand in, swing hand out, swing arm left/up/down/right, etc.) to either drone or robot arm movements. The findings of this study present the efficacy and limitations (precision and ergonomics) of hand gesture control of two distinct types of robotic technology. All source code associated with this project will be open sourced and placed on GitHub. In conclusion, this study offers a framework that maps hand and arm gestures to drone and robot arm control. The system has been implemented using current ubiquitous technologies, and these software artifacts will be open sourced for future researchers or practitioners to use in their work.
Keywords: human robot interaction, drones, gestures, robotics
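A minimal sketch of the gesture-to-command mapping layer described above. The pose names follow the Myo armband's standard gesture set, while the drone and robot-arm command strings are hypothetical placeholders rather than calls into the DJI or Stäubli APIs.

```python
from enum import Enum

class MyoPose(Enum):
    FIST = "fist"
    FINGERS_SPREAD = "fingers_spread"
    WAVE_IN = "wave_in"       # swing hand in
    WAVE_OUT = "wave_out"     # swing hand out
    DOUBLE_TAP = "double_tap"

# hypothetical command names; a real system would call the vendor SDKs here
DRONE_COMMANDS = {
    MyoPose.FIST: "takeoff",
    MyoPose.FINGERS_SPREAD: "land",
    MyoPose.WAVE_IN: "yaw_left",
    MyoPose.WAVE_OUT: "yaw_right",
    MyoPose.DOUBLE_TAP: "hover",
}

ARM_COMMANDS = {
    MyoPose.FIST: "close_gripper",
    MyoPose.FINGERS_SPREAD: "open_gripper",
    MyoPose.WAVE_IN: "jog_joint1_minus",
    MyoPose.WAVE_OUT: "jog_joint1_plus",
    MyoPose.DOUBLE_TAP: "stop",
}

def dispatch(pose: MyoPose, target: str) -> str:
    """Translate an armband pose into a command for the selected device."""
    table = DRONE_COMMANDS if target == "drone" else ARM_COMMANDS
    return table.get(pose, "no_op")

print(dispatch(MyoPose.WAVE_IN, "drone"))   # -> yaw_left
print(dispatch(MyoPose.FIST, "arm"))        # -> close_gripper
```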
Procedia PDF Downloads 159
699 Superchaotropicity: Grafted Surface to Probe the Adsorption of Nano-Ions
Authors: Raimoana Frogier, Luc Girard, Pierre Bauduin, Diane Rebiscoul, Olivier Diat
Abstract:
Nano-ions (NIs) are ionic species or clusters of nanometric size. Their low charge density and the delocalization of their charges give special properties to some NIs belonging to the chemical classes of polyoxometalates (POMs) or boron clusters. They have the particularity of interacting non-covalently with neutral hydrated surfaces or interfaces such as assemblies of surface-active molecules (micelles, vesicles, lyotropic liquid crystals), foam bubbles or emulsion droplets. This makes it possible to classify those NIs in the Hofmeister series as superchaotropic ions. The mechanism of adsorption is complex, linked to the simultaneous dehydration of the ion and of the molecule or supramolecular assembly with which it interacts, all with an enthalpic gain in the free energy of the system. This interaction process is reversible and is sufficiently pronounced to induce changes in molecular and supramolecular shape or conformation and phase transitions in the liquid phase, all at sub-millimolar ionic concentrations. This new property of some NIs opens up new possibilities for applications in fields as varied as biochemistry for solubilization, recovery of metals of interest by foams in the form of NIs... In order to better understand the physico-chemical mechanisms at the origin of this interaction, we use silicon wafers functionalized with non-ionic oligomers (polyethylene glycol chains, or PEG) to study in situ, by X-ray reflectivity, the interaction of NIs with the grafted chains. This study, carried out at the ESRF (European Synchrotron Radiation Facility), has shown that the adsorption of NIs such as POMs has very fast kinetics. Moreover, the distribution of the NIs in the grafted PEG chain layer was quantified. These results are very encouraging and confirm what has been observed on soft interfaces such as micelles or foams. The possibility to play on the density, length and chemical nature of the grafted chains makes this system an ideal tool to provide kinetic and thermodynamic information to decipher the complex mechanisms at the origin of this adsorption.
Keywords: adsorption, nano-ions, solid-liquid interface, superchaotropicity
Procedia PDF Downloads 67
698 Analysis of Citation Rate and Data Reuse for Openly Accessible Biodiversity Datasets on Global Biodiversity Information Facility
Authors: Nushrat Khan, Mike Thelwall, Kayvan Kousha
Abstract:
Making research data openly accessible has been mandated by most funders over the last 5 years as it promotes reproducibility in science and reduces duplication of effort to collect the same data. There is evidence that articles that publicly share research data have higher citation rates in the biological and social sciences. However, how and whether shared data are being reused is not always intuitive, as such information is not easily accessible from the majority of research data repositories. This study aims to understand the practice of data citation and how data are being reused over the years, focusing on biodiversity since research data are frequently reused in this field. Metadata of 38,878 datasets, including citation counts, were collected through the Global Biodiversity Information Facility (GBIF) API for this purpose. GBIF was used as a data source since it provides citation counts for datasets, a feature not commonly available in most repositories. Analysis of dataset types, citation counts, and creation and update times of datasets suggests that the citation rate varies for different types of datasets, where occurrence datasets that have more granular information have higher citation rates than checklist and metadata-only datasets. Another finding is that biodiversity datasets on GBIF are frequently updated, which is unique to this field. The majority of the datasets from the earliest year, 2007, were updated after 11 years, with no dataset left without an update since creation. For each year between 2007 and 2017, we compared the correlations between update time and citation rate for four different types of datasets. While recent datasets do not show any correlations, 3 to 4 year old datasets show a weak correlation, where datasets that were updated more recently received higher citations. The results suggest that it takes several years to accumulate citations for research datasets. However, this investigation found that when the same datasets are searched on the Google Scholar or Scopus databases, the number of citations is often not the same as on GBIF. Hence, a future aim is to further explore the citation count system adopted by GBIF to evaluate its reliability and whether it can be applicable to other fields of study as well.
Keywords: data citation, data reuse, research data sharing, webometrics
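For readers who want to reproduce the data-collection step, the sketch below pulls one page of dataset metadata from GBIF's public registry API and tabulates dataset types and last-update years. The endpoint path, parameters and field names are assumptions based on the current public API and may differ from what the authors used; the citation counts analysed in the study were likewise retrieved through the GBIF API.

```python
import requests
from collections import Counter

# One page of dataset metadata from the GBIF registry API (endpoint and field
# names are assumptions based on the public API, not the study's exact code).
resp = requests.get("https://api.gbif.org/v1/dataset", params={"limit": 100, "offset": 0})
resp.raise_for_status()
records = resp.json().get("results", [])

type_counts = Counter(rec.get("type", "UNKNOWN") for rec in records)
update_years = Counter((rec.get("modified") or "")[:4] for rec in records)

print("dataset types on this page:", dict(type_counts))
print("last-update years on this page:", dict(update_years))
```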
Procedia PDF Downloads 178
697 Analyzing Industry-University Collaboration Using Complex Networks and Game Theory
Authors: Elnaz Kanani-Kuchesfehani, Andrea Schiffauerova
Abstract:
Due to the novelty of nanotechnology, its highly knowledge-intensive content, and its invaluable application in almost all technological fields, close interaction between university and industry is essential. A possible gap between the academic strength to generate good nanotechnology ideas and the industrial capacity to receive them can thus have far-reaching consequences. In order to be able to enhance the collaboration between the two parties, a better understanding of knowledge transfer within the university-industry relationship is needed. The objective of this research is to investigate the research collaboration between academia and industry in Canadian nanotechnology and to propose the best cooperative strategy to maximize the quality of the produced knowledge. First, a network of all Canadian academic and industrial nanotechnology inventors is constructed using patent data from the USPTO (United States Patent and Trademark Office), and it is analyzed with social network analysis software. The actual level of university-industry collaboration in Canadian nanotechnology is determined, and the significance of each group of actors in the network (academic vs. industrial inventors) is assessed. Second, a novel methodology is proposed, in which the network of nanotechnology inventors is assessed from a game-theoretic perspective. It involves studying a cooperative game with n players, each having at most n-1 decisions to choose from. The equilibrium leads to a strategy for all the players to choose their co-worker in the next period in order to maximize the correlated payoff of the game. The payoffs of the game represent the quality of the produced knowledge based on the citations of the patents. The best suggestion for the next collaborative relationship is provided for each actor from a game-theoretic point of view in order to maximize the quality of the produced knowledge. One of the major contributions of this work is the novel approach which combines game theory and social network analysis for the case of large networks. This approach can serve as a powerful tool in the analysis of the strategic interactions of network actors within innovation systems and other large-scale networks.
Keywords: cooperative strategy, game theory, industry-university collaboration, knowledge production, social network analysis
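The network-construction step can be sketched as follows: a co-inventorship graph is built from patent records, with citation counts kept as a rough payoff proxy, and simple centrality measures are computed for academic versus industrial actors. The toy records, sector labels and payoff proxy are illustrative and do not reproduce the USPTO data or the game-theoretic solution used in the paper.

```python
import itertools
import networkx as nx

# toy patent records: (inventors, sector labels, citation count as a payoff proxy)
patents = [
    (["A. Lee", "B. Cho"], {"A. Lee": "academia", "B. Cho": "industry"}, 12),
    (["B. Cho", "C. Diaz"], {"B. Cho": "industry", "C. Diaz": "industry"}, 3),
    (["A. Lee", "C. Diaz", "D. Roy"],
     {"A. Lee": "academia", "C. Diaz": "industry", "D. Roy": "academia"}, 20),
]

G = nx.Graph()
for inventors, sectors, citations in patents:
    for name, sector in sectors.items():
        G.add_node(name, sector=sector)
    # connect every pair of co-inventors; accumulate citations as the edge weight
    for u, v in itertools.combinations(inventors, 2):
        w = G[u][v]["weight"] + citations if G.has_edge(u, v) else citations
        G.add_edge(u, v, weight=w)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
for n in G.nodes:
    print(f"{n:8s} {G.nodes[n]['sector']:9s} "
          f"degree = {degree[n]:.2f}  betweenness = {betweenness[n]:.2f}")
```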
Procedia PDF Downloads 258
696 Formula Student Car: Design, Analysis and Lap Time Simulation
Authors: Rachit Ahuja, Ayush Chugh
Abstract:
Aerodynamic forces and moments, as well as tire-road forces, largely affect the maneuverability of the vehicle. Car manufacturers are greatly fascinated and influenced by the various aerodynamic improvements made in formula cars, and there is a constant effort to apply these aerodynamic improvements in road vehicles. In motor racing, the key differentiating factor in a high performance car is its ability to maintain the highest possible acceleration in the appropriate direction. One of the main areas of concern in motor racing is the balance of aerodynamic forces and the streamlining of the airflow across the body of the vehicle. At present, formula racing cars are regulated by stringent FIA norms; there are constraints on the dimensions of the vehicle, engine capacity, etc. So one of the fields in which there is large scope for improvement is the aerodynamics of the vehicle. In this project work, an attempt has been made to design a formula student (FS) car, improve its aerodynamic characteristics through steady-state CFD simulations and simultaneously calculate its lap time. Initially, a CAD model of a formula student car is made using SOLIDWORKS as per the given dimensions, and a steady-state external air-flow simulation is performed on the baseline model of the formula student car without any add-on device to evaluate and analyze the air-flow pattern around the car and the aerodynamic forces using the FLUENT solver. A detailed survey of different add-on devices used in racing applications, such as the front wing, diffuser, shark fin and T-wing, is made, and geometric models of these add-on devices are created. These add-on devices are assembled with the baseline model. Steady-state CFD simulations are done on the modified car to evaluate the aerodynamic effects of these add-on devices on the car. Later, a comparison of lap time simulations of the formula student car with and without the add-on devices is done with the help of MATLAB. Aerodynamic performance parameters such as lift, drag and their coefficients are evaluated for different configurations and designs of the add-on devices at different speeds of the vehicle. From parametric CFD simulations on the formula student car fitted with add-on devices, there is a considerable amount of drag and lift force reduction, besides streamlining of the airflow across the car. The best possible configuration of these add-on devices is obtained from these CFD simulations, and the use of these add-on devices has shown an improvement in the performance of the car, which can be compared through various lap time simulations of the car.
Keywords: aerodynamic performance, front wing, lap time simulation, T-wing
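As a companion to the CFD and MATLAB work described above, the sketch below shows the kind of point-mass calculation a lap-time simulator is built on: aerodynamic drag and downforce from the usual 1/2·ρ·v²·A·C expressions and the resulting grip-limited cornering speed. All coefficients, areas and masses are assumed placeholder values, not the authors' car.

```python
import math

RHO = 1.225        # air density, kg/m^3
A = 1.1            # frontal area, m^2 (assumed)
CD, CL = 0.9, 1.6  # drag and downforce coefficients (assumed)
MASS = 230.0       # car + driver, kg (assumed)
MU = 1.4           # tyre friction coefficient (assumed)
G = 9.81

def aero_forces(v):
    """Aerodynamic drag and downforce at speed v (m/s)."""
    q = 0.5 * RHO * v * v
    return q * A * CD, q * A * CL

def corner_speed(radius, v_guess=20.0, iters=50):
    """Grip-limited cornering speed on a given radius; downforce raises the
    limit, so iterate because downforce itself depends on speed."""
    v = v_guess
    for _ in range(iters):
        _, downforce = aero_forces(v)
        v = math.sqrt(MU * (MASS * G + downforce) * radius / MASS)
    return v

for r in (15.0, 30.0, 60.0):
    v = corner_speed(r)
    drag, down = aero_forces(v)
    print(f"R = {r:4.0f} m: v = {v*3.6:5.1f} km/h, drag = {drag:5.0f} N, downforce = {down:5.0f} N")
```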
Procedia PDF Downloads 197
695 Career Guidance System Using Machine Learning
Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan
Abstract:
Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years, it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take into consideration their own preferences, which might lead to many other problems like shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that would help in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. There are various systems of career guidance that work based on the same logic, such as the classification of applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, D-Tree, and many other advanced algorithms are applied to the collected data to compute predictions that are helpful for identifying the right careers. Besides helping users with their career choice, these systems provide numerous opportunities which are very useful while making this hard decision. They help the candidate to recognize where he/she specifically lacks sufficient skills so that the candidate can improve those skills. They are also capable of offering an e-learning platform, taking into account the user's lack of knowledge. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills
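A minimal sketch of the KNN idea mentioned above: skill and personality scores are encoded as feature vectors and a k-nearest-neighbours classifier suggests a career label. The feature set, labels and toy data are hypothetical; a real system would be trained and validated on a proper dataset.

```python
from sklearn.neighbors import KNeighborsClassifier

# hypothetical features: [maths, programming, communication, creativity], scored 0-10
X_train = [
    [9, 8, 4, 5], [8, 9, 5, 4],   # -> software engineer
    [5, 3, 9, 7], [4, 4, 8, 8],   # -> marketing specialist
    [7, 5, 6, 9], [6, 4, 7, 9],   # -> product designer
]
y_train = ["software engineer", "software engineer",
           "marketing specialist", "marketing specialist",
           "product designer", "product designer"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

candidate = [[8, 7, 5, 6]]                 # one user's self-assessed profile
print(model.predict(candidate)[0])         # suggested career label
print(model.predict_proba(candidate))      # neighbour-vote proportions per class
```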
Procedia PDF Downloads 80
694 Numerical Analysis of Heat Transfer in Water Channels of the Opposed-Piston Diesel Engine
Authors: Michal Bialy, Marcin Szlachetka, Mateusz Paszko
Abstract:
This paper discusses CFD results for heat transfer in the water channels of the engine body. The research engine is a newly designed Diesel combustion engine with three cylinders and three pairs of opposed pistons inside. The engine will be able to generate 100 kW mechanical power at a crankshaft speed of 3,800-4,000 rpm. The water channels run in the engine body along the axis of the three cylinders and surround the three combustion chambers. The water channels transfer the combustion heat that occurs in the cylinders to the external radiator. This CFD research was based on the ANSYS Fluent software and aimed to optimize the geometry of the water channels so that they provide maximum heat flow from the combustion chamber to the external radiator. Based on the parallel simulation research, the boundary and initial conditions enabled us to specify average values of key parameters for our numerical analysis. Our simulation used the averaged momentum equations and the two-equation k-epsilon turbulence model; a realizable k-epsilon model with a standard wall function was used, and the turbulence intensity factor was 10%. The working fluid mass flow rate was calculated for a single typical value, specified in line with research into the flow rates of automotive engine cooling pumps used in engines of similar power. The research uses a series of geometric models which differ, for instance, in the shape of the cross-section of the channel along the axis of the cylinder. The results are presented as colour distribution maps of temperature, velocity fields and heat flow through the cylinder walls. Due to limitations of space, our paper presents the results for the most representative geometric model only. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
Keywords: ANSYS Fluent, combustion engine, computational fluid dynamics (CFD), cooling system
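As a rough cross-check on the CFD, the coolant-side heat transfer coefficient in a channel is often estimated with the Dittus-Boelter correlation, Nu = 0.023·Re^0.8·Pr^0.4. The sketch below uses assumed water properties and an assumed channel size and velocity, not the engine geometry or the mass flow rate used in the simulations.

```python
# Rough Dittus-Boelter estimate of the coolant-side heat transfer coefficient
# (assumed water properties at ~90 C and an assumed channel size, not the CFD model).
rho = 965.0       # kg/m^3
mu = 3.15e-4      # Pa s
k = 0.675         # W/(m K)
cp = 4205.0       # J/(kg K)

d_h = 0.012       # hydraulic diameter of the channel, m (assumed)
velocity = 1.5    # mean coolant velocity, m/s (assumed)

re = rho * velocity * d_h / mu
pr = cp * mu / k
nu = 0.023 * re**0.8 * pr**0.4        # Dittus-Boelter, fluid being heated
h = nu * k / d_h

print(f"Re = {re:.0f}, Pr = {pr:.2f}, Nu = {nu:.0f}, h = {h:.0f} W/(m^2 K)")
```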
Procedia PDF Downloads 219
693 Design Charts for Strip Footing on Untreated and Cement Treated Sand Mat over Underlying Natural Soft Clay
Authors: Sharifullah Ahmed, Sarwar Jahan Md. Yasin
Abstract:
Shallow foundations on unimproved soft natural soils can undergo high consolidation and secondary settlement. For low- and medium-rise building projects on such soil conditions, pile foundations may not be cost-effective. In such cases, an alternative to pile foundations may be shallow strip footings placed on a double-layered improved soil system. The upper layer of this system is untreated or cement-treated compacted sand, and the underlying layer is natural soft clay. This system will reduce the settlement to an allowable limit. The current research has been conducted on the settlement of a rigid plane-strain strip footing of 2.5 m width placed on the surface of a soil consisting of an untreated or cement-treated sand layer overlying a bed of homogeneous soft clay. The settlement of this shallow foundation has been studied for both cases, with thicknesses of the sand layer of 0.3 to 0.9 times the width of the footing. The response of the clay layer is assumed to be undrained for plastic loading stages and drained during consolidation stages. The response of the sand layer is drained during all loading stages. FEM analysis was done using PLAXIS 2D Version 8.0. A natural clay deposit of 15 m thickness and 18 m width has been modeled using the Hardening Soil Model, Soft Soil Model and Soft Soil Creep Model, and the upper improvement layer has been modeled using only the Hardening Soil Model. The groundwater level is at the top of the clay deposit, which makes the system fully saturated. A parametric study has been conducted to determine the effect of the thickness, density and cementation of the sand mat and the density and shear strength of the soft clay layer on the settlement of the strip foundation under a uniformly distributed vertical load of varying magnitude. A set of charts has been established for designing shallow strip footings on a sand mat over a thick, soft clay deposit by obtaining the particular thickness of the sand mat, for particular subsoil parameters, that ensures no punching shear failure and no settlement beyond the allowable level. Design guidelines in the form of non-dimensional charts have been developed for footing pressures equivalent to medium-rise residential or commercial building foundations with strip footings on soft inorganic Normally Consolidated (NC) soil of Bangladesh having a void ratio from 1.0 to 1.45.
Keywords: design charts, ground improvement, PLAXIS 2D, primary and secondary settlement, sand mat, soft clay
Procedia PDF Downloads 123
692 Problem Solving: Process or Product? A Mathematics Approach to Problem Solving in Knowledge Management
Authors: A. Giannakopoulos, S. B. Buckley
Abstract:
Problem solving in any field is recognised as a prerequisite for any advancement in knowledge. For example, in South Africa it is one of the seven critical outcomes of education, together with critical thinking. Since a systematic way of problem solving was initiated in mathematics by the great mathematician George Polya (the father of problem solving), more detailed and comprehensive approaches to problem solving have been developed. This paper is based on the findings by the authors and subsequent recommendations for further research in problem solving and critical thinking. Although the study was done in mathematics, there is no doubt by now in almost anyone’s mind that mathematics is involved to a greater or a lesser extent in all fields, from symbols, to variables, to equations, to logic, to critical thinking. Therefore it stands to reason that mathematical principles and learning cannot be divorced from any field. In knowledge management situations, the types of problems are similar to mathematics problems, varying from simple to analogical to complex, and from well-structured to ill-structured problems. While simple problems could be solved by employees by adhering to prescribed sequential steps (the process), analogical and complex problems cannot be proceduralised, and that diminishes the organisation's capacity for knowledge creation and innovation. The low efficiency in some organisations and the low pass rates in mathematics prompted the authors to view problem solving as a product. The authors argue that using mathematical approaches to knowledge management problem solving and treating problem solving as a product will empower the employee, through further training, to tackle analogical and complex problems. The question the authors asked was: if it is true that problem solving and critical thinking are indeed basic skills necessary for the advancement of knowledge, why is there so little knowledge management (KM) literature about them, about how they are connected, and about how they advance KM? This paper concludes with a conceptual model which is based on generally accepted principles of knowledge acquisition (developing a learning organisation) and of knowledge creation, sharing, disseminating and storing, the five pillars of knowledge management (KM). This model also expands on Gray’s framework on KM practices and problem solving, and opens the door to a new approach to training employees in general and in domain-specific problem areas, which can be adapted in any type of organisation.
Keywords: critical thinking, knowledge management, mathematics, problem solving
Procedia PDF Downloads 596
691 Convergence Results of Two-Dimensional Homogeneous Elastic Plates from Truncation of Potential Energy
Authors: Erick Pruchnicki, Nikhil Padhye
Abstract:
Plates are important engineering structures which have attracted extensive research since the 19th century. The subject of this work is the static analysis of a linearly elastic homogeneous plate under small deformations. A 'thin plate' is a three-dimensional structure comprising a small transverse dimension with respect to a flat mid-surface. The general aim of any plate theory is to deduce a two-dimensional model, expressed in terms of mid-surface quantities, that approximately and accurately describes the plate's deformation. In recent decades, a common starting point for this purpose has been to utilize a series expansion of the displacement field across the thickness dimension in terms of the thickness parameter (h). These attempts are mathematically consistent in deriving leading-order plate theories based on certain a priori scalings between the thickness and the applied loads; for example, asymptotic methods are aimed at generating leading-order two-dimensional variational problems by postulating a formal asymptotic expansion of the displacement fields. Such methods rigorously generate a hierarchy of two-dimensional models depending on the order of magnitude of the applied load with respect to the plate thickness. However, in practice, applied loads are external and thus not directly linked to or dependent on the geometry/thickness of the plate, rendering any such model (based on a priori scaling) of limited practical utility. In other words, the main limitation of these approaches is that they do not furnish a single plate model for all orders of applied loads. Following the analogy of recent efforts deploying Fourier-series expansions to study the convergence of reduced models, we propose two-dimensional model(s) resulting from truncation of the potential energy and rigorously prove the convergence of these two-dimensional plate models to the parent three-dimensional linear elasticity with increasing truncation order of the potential energy.
Keywords: plate theory, Fourier-series expansion, convergence result, Legendre polynomials
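To make the truncation idea concrete, a schematic form of the thickness-wise expansion and of the truncated potential energy is written below using Legendre polynomials P_k; the symbols (N for the truncation order, h for the thickness, Π_N for the truncated energy) are illustrative notation and are not taken verbatim from the paper.

```latex
% Schematic thickness-wise expansion in Legendre polynomials P_k and the
% truncated potential energy minimised by the resulting two-dimensional model
% (illustrative notation, not taken verbatim from the paper).
\[
  u_i(x_1,x_2,x_3) \;\approx\; u_i^{N}(x_1,x_2,x_3)
    \;=\; \sum_{k=0}^{N} u_i^{(k)}(x_1,x_2)\, P_k\!\left(\frac{2x_3}{h}\right),
  \qquad i = 1,2,3,
\]
\[
  \Pi_N\!\left[u^{(0)},\dots,u^{(N)}\right]
    \;=\; \frac{1}{2}\int_{\Omega} \mathbb{C}\,\varepsilon\!\left(u^{N}\right) : \varepsilon\!\left(u^{N}\right)\mathrm{d}V
      \;-\; \int_{\Omega} f \cdot u^{N}\,\mathrm{d}V .
\]
% Minimising \Pi_N over the mid-surface coefficients u^{(k)} yields the
% two-dimensional plate model of truncation order N.
```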
Procedia PDF Downloads 113
690 Career Guidance System Using Machine Learning
Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan
Abstract:
Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years, it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take into consideration their own preferences, which might lead to many other problems like shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that would help in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. There are various systems of career guidance that work based on the same logic, such as the classification of applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, D-Tree, and many other advanced algorithms are applied to the collected data to compute predictions that are helpful for identifying the right careers. Besides helping users with their career choice, these systems provide numerous opportunities which are very useful while making this hard decision. They help the candidate to recognize where he/she specifically lacks sufficient skills so that the candidate can improve those skills. They are also capable of offering an e-learning platform, taking into account the user's lack of knowledge. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills
Procedia PDF Downloads 70
689 Small-Group Case-Based Teaching: Effects on Student Achievement, Critical Thinking, and Attitude toward Chemistry
Authors: Reynante E. Autida, Maria Ana T. Quimbo
Abstract:
The chemistry education curriculum provides an excellent avenue where students learn the principles and concepts of chemistry and, at the same time, since chemistry is a central science, better understand related fields. However, the teaching approach used by teachers affects student learning. Case-based teaching (CBT) is one of the various forms of the inductive method: the teacher starts with specifics and then proceeds to the general principles. The students' role in inductive learning shifts from being passive in the traditional approach to being active in learning. In this paper, the effects of Small-Group Case-Based Teaching (SGCBT) on college chemistry students' achievement, critical thinking, and attitude toward chemistry, including the relationships between each of these variables, were determined. A quasi-experimental counterbalanced design with a pre-post control group was used to determine the effects of SGCBT on engineering students in four intact classes (two treatment groups and two control groups) in one of the state universities in Mindanao. The independent variable is the type of teaching approach (SGCBT versus pure lecture-discussion teaching, or PLDT), while the dependent variables are chemistry achievement (exam scores) and scores in critical thinking and chemistry attitude. Both analysis of covariance (ANCOVA) and t-tests (within and between groups and on gain scores) were used to compare the effects of SGCBT versus PLDT on students' chemistry achievement, critical thinking, and attitude toward chemistry, while Pearson product-moment correlation coefficients were calculated to determine the relationships between the variables. Results show that the use of SGCBT fosters a positive attitude toward chemistry and provides some indication of improved chemistry achievement of students compared with PLDT. Meanwhile, the effects of PLDT and SGCBT on critical thinking are comparable. Furthermore, correlational analysis and focus group interviews indicate that the use of SGCBT not only supports the development of a positive attitude towards chemistry but also improves the chemistry achievement of students. Implications are provided in view of the recent findings on SGCBT, and topics for further research are presented as well.
Keywords: case-based teaching, small-group learning, chemistry cases, chemistry achievement, critical thinking, chemistry attitude
Procedia PDF Downloads 297
688 3D Interpenetrated Network Based on 1,3-Benzenedicarboxylate and 1,2-Bis(4-Pyridyl) Ethane
Authors: Laura Bravo-García, Gotzone Barandika, Begoña Bazán, M. Karmele Urtiaga, Luis M. Lezama, María I. Arriortua
Abstract:
Solid coordination networks (SCNs) are materials consisting of metal ions or clusters that are linked by polyfunctional organic ligands and can be designed to form tridimensional frameworks. Their structural features, for example high surface areas, thermal stability, and in other cases large cavities, have opened a wide range of applications in fields like drug delivery, host-guest chemistry, biomedical imaging, chemical sensing, heterogeneous catalysis and others referring to greenhouse gas storage or even separation. In this sense, the use of polycarboxylate anions and dipyridyl ligands is an effective strategy to produce extended structures with the needed characteristics for these applications. In this context, a novel compound, [Cu4(m-BDC)4(bpa)2DMF]•DMF, has been obtained by microwave synthesis, where m-BDC is 1,3-benzenedicarboxylate and bpa is 1,2-bis(4-pyridyl)ethane. The crystal structure can be described as a three-dimensional framework formed by two identical, interpenetrated networks. Each network consists of two different CuII dimers. Dimer 1 has two copper centres with square pyramidal coordination, and dimer 2 has one with square pyramidal coordination and the other with octahedral coordination; the latter dimer is unique in the literature, and therefore the combination of both types of dimers is unprecedented. The benzenedicarboxylate ligands form sinusoidal chains between dimers of the same type and also connect both chains, forming layers in the (100) plane. These layers are connected along the [100] direction through the bpa ligand, giving rise to a 3D network with voids of 10 Å² on average. However, the fact that there are two interpenetrated networks results in a significant reduction of the available volume. Structural analysis was carried out by means of single-crystal X-ray diffraction and IR spectroscopy. Thermal and magnetic properties have been measured by means of thermogravimetry (TG), X-ray thermodiffractometry (TDX), and electron paramagnetic resonance (EPR). Additionally, CO2 and CH4 high-pressure adsorption measurements have been carried out for this compound.
Keywords: gas adsorption, interpenetrated networks, magnetic measurements, solid coordination network (SCN), thermal stability
Procedia PDF Downloads 324
687 Study of the Montmorillonite Effect on PET/Clay and PEN/Clay Nanocomposites
Authors: F. Zouai, F. Z. Benabid, S. Bouhelal, D. Benachour
Abstract:
Polymer/clay nanocomposites are a relatively important area of research. These reinforced plastics have attracted considerable attention in scientific and industrial fields because a very small amount of clay can significantly improve the properties of the polymer. The polymeric matrices used in this work are two saturated polyesters, i.e. poly(ethylene terephthalate) (PET) and poly(ethylene naphthalate) (PEN). The success of processing compatible blends, based on poly(ethylene terephthalate) (PET)/poly(ethylene naphthalate) (PEN)/clay nanocomposites, in one step by reactive melt extrusion is described. Untreated clay was first purified and functionalized ‘in situ’ with a compound based on an organic peroxide/sulfur mixture and tetramethylthiuram disulfide as the activator for sulfur. The PET and PEN materials were first separately mixed in the molten state with the functionalized clay. The PET/4 wt% clay and PEN/7.5 wt% clay compositions showed total exfoliation. These compositions, denoted nPET and nPEN, respectively, were used to prepare new n(PET/PEN) nanoblends in the same mixing batch. The n(PET/PEN) nanoblends were compared to neat PET/PEN blends. The blends and nanocomposites were characterized using various techniques, and their microstructural and nanostructural properties were investigated. Fourier transform infrared spectroscopy (FTIR) results showed that the exfoliation of the tetrahedral clay nanolayers is complete and the octahedral structure totally disappears. It was shown that total exfoliation, confirmed by wide-angle X-ray scattering (WAXS) measurements, contributes to the enhancement of impact strength and tensile modulus. In addition, WAXS results indicated that all samples are amorphous. The differential scanning calorimetry (DSC) study indicated the occurrence of one glass transition temperature Tg, one crystallization temperature Tc and one melting temperature Tm for every composition. This was evidence that both PET/PEN and nPET/nPEN blends are compatible over the entire range of compositions. In addition, the nPET/nPEN blends showed lower Tc and higher Tm values than the corresponding neat PET/PEN blends. In conclusion, the results obtained indicate that the n(PET/PEN) blends differ from the pure ones in nanostructure and physical behavior.
Keywords: blends, exfoliation, XRD, DSC, montmorillonite, nanocomposites, PEN, PET, plastograph, reactive melt-mixing
Procedia PDF Downloads 298
686 Magnetic Bio-Nano-Fluids for Hyperthermia
Authors: Z. Kolacinski, L. Szymanski. G. Raniszewski, D. Koza, L. Pietrzak
Abstract:
A magnetic Bio-Nano-Fluid (BNF) can be composed of a buffer fluid such as plasma and magnetic nanoparticles such as iron, nickel, cobalt and their oxides. Iron is one of the best elements for magnetization by electromagnetic radiation and can be used as a tool for medical diagnosis and treatment. Radio frequency (RF) radiation is able to heat iron nanoparticles due to magnetic hysteresis. Electromagnetic heating of iron nanoparticles and ferro-fluid BNFs can be successfully used for non-invasive thermal ablation of cancer cells. Moreover, iron atoms can be carried by carbon nanotubes (CNTs) if iron is used as the catalyst for CNT synthesis. The CNTs then become iron containers and screen the iron content against oxidation. We will present a method of addressing the CNTs to the required cells. For thermal ablation of cancer cells we use radio frequencies for which the interaction with the human body should be limited to a minimum. Generally, the application of RF energy fields for medical treatment is justified by deep tissue penetration. Highly iron-doped CNTs as the carriers creating the magnetic fluid will be presented, as will an excessive catalyst injection method using an electrical furnace and a microwave plasma reactor. In this way it is possible to grow the Fe-filled CNTs on a moving surface in a continuous synthesis process, which also allows producing a uniform carpet of the Fe-filled CNT carriers. For the experimental work targeted at cell ablation, we used an RF generator to measure the increase in temperature for several samples: a solution of Fe2O3 in BNF, which can be a plasma-like buffer; solutions of pure iron of different concentrations in a plasma-like buffer and in a buffer used for cell culture; and solutions of carbon nanotubes (MWCNTs) of different concentrations in a plasma-like buffer and in a buffer used for cell culture. Then the targeted therapies, which can be effective if the carriers are able to distinguish between the physiology of cancerous and healthy cells, are considered. We have developed an approach based on ligand-receptor or antibody-antigen interactions for the case of colon cancer.
Keywords: cancer treatment, carbon nanotubes, drug delivery, hyperthermia, iron
Procedia PDF Downloads 413
685 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa
Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees
Abstract:
The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observation of solar radiation is available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a “retrospective analysis” of atmospheric parameters generated by assimilating observation data from various sources, including ground observations, satellites, ships, and aircraft observations, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface-measured data using statistical analysis. The global solar radiation (GSR) distribution was estimated over six different selected locations in North Africa during the ten-year period from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE) and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in the performance of the dataset, which reveals a significant variation of errors in different seasons; the performance of the dataset also changes with the temporal resolution of the data used for comparison. The monthly mean values of the data show better performance, but the accuracy of the data is compromised. The solar radiation data of ERA-5 are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R²) varies from 0.93 to 0.99 for the different selected sites in North Africa in the present research. The goal of this research is to give a good representation of global solar radiation to help in solar energy applications in all fields, and this can be done by using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model to give good results.
Keywords: solar energy, solar radiation, ERA-5, potential energy
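The error metrics quoted above can be reproduced from paired series of measured and reanalysis irradiance with a few lines of numpy; the sample arrays below are placeholders, not the station data used in the study.

```python
import numpy as np

# placeholder daily global solar radiation series (kWh/m^2); real work would use
# station measurements paired with ERA5 values at the same locations and dates
measured = np.array([5.1, 6.3, 7.0, 6.8, 4.9, 5.6, 7.2])
era5     = np.array([5.0, 6.6, 6.8, 7.1, 5.2, 5.4, 7.0])

diff = era5 - measured
rmse = np.sqrt(np.mean(diff**2))
mbe  = np.mean(diff)
mae  = np.mean(np.abs(diff))
r2   = np.corrcoef(measured, era5)[0, 1]**2

print(f"RMSE = {rmse:.3f}, MBE = {mbe:.3f}, MAE = {mae:.3f}, R^2 = {r2:.3f}")
```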
Procedia PDF Downloads 211
684 Numerical Study of the Breakdown of Surface Divergence Based Models for Interfacial Gas Transfer Velocity at Large Contamination Levels
Authors: Yasemin Akar, Jan G. Wissink, Herlina Herlina
Abstract:
The effect of various levels of contamination on the interfacial air–water gas transfer velocity is studied by Direct Numerical Simulation (DNS). The interfacial gas transfer is driven by isotropic turbulence, introduced at the bottom of the computational domain, diffusing upwards. The isotropic turbulence is generated in a separate, concurrently running large-eddy simulation (LES). The flow fields in the main DNS and the LES are solved using fourth-order discretisations of convection and diffusion. To solve the transport of dissolved gases in water, a fifth-order-accurate WENO scheme is used for scalar convection combined with a fourth-order central discretisation for scalar diffusion. The damping effect of the surfactant contamination on the near-surface (horizontal) velocities in the DNS is modelled using horizontal gradients of the surfactant concentration. An important parameter in this model, which corresponds to the level of contamination, is Re·Ma/We, where Re is the Reynolds number, Ma is the Marangoni number, and We is the Weber number. It was previously found that even small levels of contamination (small Re·Ma/We) lead to a significant drop in the interfacial gas transfer velocity KL. It is known that KL depends on both the Schmidt number Sc (the ratio of the kinematic viscosity to the gas diffusivity in water) and the surface divergence β, i.e. KL ∝ √(β/Sc). Previously it has been shown that this relation works well for surfaces with low to moderate contamination. However, it breaks down for β close to zero. To study the validity of this dependence in the presence of surface contamination, simulations were carried out for Re·Ma/We = 0, 0.12, 0.6, 1.2, 6, 30 and Sc = 2, 4, 8, 16, 32. First, it will be shown that the scaling of KL with Sc remains valid also for larger Re·Ma/We. This is an important result that indicates that - for various levels of contamination - the numerical results obtained at low Schmidt numbers are also valid for significantly higher and more realistic Sc. Subsequently, it will be shown that - with increasing levels of Re·Ma/We - the dependency of KL on β begins to break down as the increased damping of near-surface fluctuations results in an increased damping of β. Especially for large levels of contamination, this damping is so severe that KL is found to be underestimated significantly.
Keywords: contamination, gas transfer, surfactants, turbulence
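Dimensionally, the quoted relation can be written as KL ≈ c·√(ν·β/Sc) = c·√(D·β). The tiny sketch below only illustrates the Sc^(-1/2) scaling; the prefactor c and the sample value of β are illustrative numbers, not results from the DNS.

```python
import math

C = 0.5          # illustrative proportionality constant, not fitted to the DNS
NU = 1.0e-6      # kinematic viscosity of water, m^2/s

def k_l(beta, sc, c=C, nu=NU):
    """Surface-divergence estimate of the transfer velocity: K_L = c*sqrt(nu*beta/Sc)."""
    return c * math.sqrt(nu * beta / sc)

beta = 2.0  # sample surface divergence, 1/s (illustrative)
for sc in (2, 4, 8, 16, 32):
    print(f"Sc = {sc:2d}: K_L ~ {k_l(beta, sc) * 1e6:.1f} um/s")
```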
Procedia PDF Downloads 300
683 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society
Authors: Irene Yi
Abstract:
Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that, over the course of human history, derogatory and sexist adjectives have been used significantly more frequently to describe females in history and literature than to describe males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not carry with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language that deal with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. The paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society comes hand in hand not only with its language but also with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.Keywords: computational analysis, gendered grammar, misogynistic language, neural networks
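To make the corpus analysis concrete, the following minimal sketch (not the authors' pipeline; the word lists, window size, and sample sentence are illustrative assumptions) counts how often selected descriptive adjectives co-occur with female- versus male-referring words in a tokenized text.

# Minimal sketch: co-occurrence counts of hand-picked adjectives with gendered words.
from collections import Counter

FEMALE = {"she", "her", "woman", "women", "girl"}
MALE = {"he", "him", "his", "man", "men", "boy"}
ADJECTIVES = {"hysterical", "shrill", "brilliant", "strong", "emotional"}

def cooccurrence_counts(tokens, window=5):
    """Count adjective occurrences within `window` tokens of a gendered word."""
    counts = {"female": Counter(), "male": Counter()}
    for i, tok in enumerate(tokens):
        if tok in ADJECTIVES:
            context = tokens[max(0, i - window): i + window + 1]
            if FEMALE & set(context):
                counts["female"][tok] += 1
            if MALE & set(context):
                counts["male"][tok] += 1
    return counts

text = "she was hysterical while he was brilliant and strong".split()
print(cooccurrence_counts(text))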
Procedia PDF Downloads 119
682 Synthesis, Characterization and Rheological Properties of Boronoxide/Polymer Nanocomposites
Authors: Mehmet Doğan, Mahir Alkan, Yasemin Turhan, Zürriye Gündüz, Pinar Beyli, Serap Doğan
Abstract:
Advances and new discoveries in materials science have played an important role in technological development. Today, materials science has branched into subfields such as metals, nonmetals, chemicals, and polymers. Polymeric nanocomposites, one of the most important of these groups, have found a wide field of application. For many polymers used in different fields of industry, improved thermal stability is desired. One way to improve this property is to form nanocomposites of the polymers using different fillers. Boron compounds have many areas of use, and their number is increasing day by day. In order to further increase the variety of uses and the industrial importance of boron compounds, it is necessary to synthesize nano-products and to find new application areas for these products. In this study, PMMA/boronoxide nanocomposites were synthesized using the solution intercalation, polymerization, and melting methods, and PAA/boronoxide nanocomposites using the solution intercalation method. Furthermore, the rheological properties of the nanocomposites synthesized by the melting method were also studied. The nanocomposites were characterized by XRD, FTIR-ATR, DTA/TG, BET, SEM, and TEM. The effects of filler amount, solvent type, and mediating reagent on the thermal stability of the polymers were investigated. In addition, the rheological properties of the PMMA/boronoxide nanocomposites synthesized by the melting method were investigated using a high-pressure capillary rheometer. XRD analysis showed that boronoxide was dispersed in the polymer matrix; FTIR-ATR showed interactions of boronoxide with both PAA and PMMA; and TEM showed that the boronoxide particles had a spherical structure and were dispersed at the nanoscale in the polymer matrix. The thermal stability of the polymers increased with the addition of boronoxide, and the decomposition mechanism of PAA was changed. From rheological measurements, it was found that PMMA and the PMMA/boronoxide nanocomposites exhibited non-Newtonian, pseudoplastic, shear-thinning behavior under all experimental conditions.Keywords: boronoxide, polymer, nanocomposite, rheology, characterization
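The shear-thinning behavior reported above is commonly described by the Ostwald-de Waele power law, η = K·γ̇^(n-1), with n < 1 for pseudoplastic fluids. The sketch below (illustrative only; the rheometer values are hypothetical, not the measured data) fits this model to capillary-rheometer data in log-log space.

# Illustrative power-law fit: log(eta) = log(K) + (n - 1) * log(gamma_dot).
import numpy as np

shear_rate = np.array([100., 300., 1000., 3000., 10000.])   # 1/s (assumed values)
viscosity  = np.array([850., 420., 190., 95., 42.])          # Pa.s (assumed values)

slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n = slope + 1.0          # flow behaviour index; n < 1 means shear thinning
K = np.exp(intercept)    # consistency index
print(f"n = {n:.2f}, K = {K:.1f} Pa.s^n")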
Procedia PDF Downloads 433
681 Biological Hazards and Laboratory-Inflicted Infections in Sub-Saharan Africa
Authors: Godfrey Muiya Mukala
Abstract:
This research looks at an array of fields in Sub-Saharan Africa, comprising agriculture, food enterprises, medicine, genetically modified organisms, microbiology, and nanotechnology, that stand to gain from biotechnological research and development. Research into dangerous organisms, mainly bacteria, rickettsia, fungi, parasites, and genetically engineered organisms, has raised serious questions about the biological danger they pose to human beings and the environment because of their uncertainties. In addition, the recurrence of previously managed diseases and the emergence of new diseases are connected to biosafety challenges, especially in rural set-ups in low- and middle-income countries. Notably, biotechnology laboratories are required to adopt biosafety measures to protect their workforce, community, environment, and ecosystem from unforeseen materials and organisms. Sensitization and educational frameworks for laboratory workers are essential to acquiring solid knowledge of harmful biological agents, in addition to knowledge of human pathogenicity, susceptibility, and epidemiology relevant to the biological material used in research and development. This article reviews and analyzes research intended to identify the proper implementation of universally accepted practices in laboratory safety and biological hazards. It identifies ideal microbiological methods, adequate containment equipment, sufficient resources, safety barriers, and specific training and education of the laboratory workforce to decrease and contain biological hazards. Knowledge of standardized microbiological techniques and processes, together with the employment of containment facilities, protective barriers, and equipment, goes a long way in preventing occupational infections. Similarly, risk reduction and prevention may be attained by training, education, and research on biohazards and on the pathogenicity and epidemiology of the relevant microorganisms. In this way, medical professionals in rural set-ups may apply knowledge acquired from the past to anticipate possible concerns in the future.Keywords: Sub-Saharan Africa, biotechnology, laboratory, infections, health
Procedia PDF Downloads 77
680 Polymer Flooding: Chemical Enhanced Oil Recovery Technique
Authors: Abhinav Bajpayee, Shubham Damke, Rupal Ranjan, Neha Bharti
Abstract:
Polymer flooding is a dramatic improvement on water flooding and is quickly becoming one of the leading EOR technologies used for improving oil recovery. With increasing energy demand and depleting oil reserves, EOR techniques are becoming increasingly significant. Since most oil fields have already begun water flooding, this chemical EOR technique can be implemented with fewer resources than any other EOR technique. The polymer increases the viscosity of the injected water, thus reducing water mobility and achieving a more stable displacement. Polymer flooding increases the injection viscosity, as field experience has revealed. While the injection of a polymer solution improves reservoir conformance, the beneficial effect ceases as soon as one attempts to push the polymer solution with water. Polymer flooding is the most commonly applied chemical EOR technique because of its higher success rate. In polymer flooding, a water-soluble polymer such as polyacrylamide is added to the water in the water flood. This increases the viscosity of the water to that of a gel, greatly improving the efficiency of the water flood. It also improves the vertical and areal sweep efficiency as a consequence of improving the water/oil mobility ratio. Polymer flooding plays an important role in oil exploitation, but around 60 million tons of wastewater are produced per day along with the extracted oil. Therefore, the treatment and reuse of this wastewater become significant, which can be carried out by electrodialysis technology. This treatment technology can not only decrease environmental pollution but also achieve closed-circuit reuse of polymer flooding wastewater during crude oil extraction. There are three potential ways in which a polymer flood can make the oil recovery process more efficient: (1) through the effects of polymers on fractional flow, (2) by decreasing the water/oil mobility ratio, and (3) by diverting injected water from zones that have already been swept. It has also been suggested that the viscoelastic behavior of polymers can improve displacement efficiency. Polymer flooding may also have an economic impact because less water is injected and produced compared with water flooding. In the future, we need to focus on developing polymers that can be used in reservoirs of high temperature and high salinity, applying polymer flooding in different reservoir conditions, and combining polymer with other processes (e.g., surfactant/polymer flooding).Keywords: fractional flow, polymer, viscosity, water/oil mobility ratio
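The water/oil mobility ratio mentioned above, M = (k_rw/μ_w)/(k_ro/μ_o), can be illustrated with a short sketch (hypothetical property values, not field data) showing how raising the injected-water viscosity with polymer lowers M toward a more favorable, stable displacement.

# Illustrative sketch of the end-point water/oil mobility ratio.
def mobility_ratio(k_rw, mu_w, k_ro, mu_o):
    """M = (k_rw / mu_w) / (k_ro / mu_o); M <= 1 indicates a favorable displacement."""
    return (k_rw / mu_w) / (k_ro / mu_o)

k_rw, k_ro = 0.3, 0.8       # end-point relative permeabilities (assumed)
mu_o = 10.0                 # oil viscosity, cP (assumed)
print(mobility_ratio(k_rw, 0.5, k_ro, mu_o))    # plain water flood, mu_w = 0.5 cP
print(mobility_ratio(k_rw, 15.0, k_ro, mu_o))   # polymer-thickened water, mu_w = 15 cP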
Procedia PDF Downloads 400
679 Quality Assessment of SSRU Program in Education
Authors: Rossukhon Makaramani, Supanan Sittilerd, Wipada Prasarnsaph
Abstract:
The study aimed to 1) examine the management status of the Program in Education at the Faculty of Education, Suan Sunandha Rajabhat University (SSRU); 2) determine the main components, indicators, and criteria for constructing a quality assessment framework; 3) assess the quality of the SSRU Program in Education; and 4) provide recommendations to promote academic excellence. The program assessed was the Bachelor of Education Program in Education (5 years), Revised Version 2009. The population and samples were stakeholders involved in the implementation of this program during the 2012 academic year. Results were: 1) The management status of the Program in Education showed that the Faculty of Education achieved a good level (4.20) in the third cycle of external quality assessment by the Office for National Education Standards and Quality Assessment (ONESQA). There were 1,192 students enrolled in the program, divided into 5 major fields of study. There were 50 faculty members, 37 holding master’s degrees and 13 holding doctorate degrees. Their academic positions consisted of 35 lecturers, 10 assistant professors, and 5 associate professors. For program management, there was a committee of 5 members for the program and a committee of 4 or 5 members for each major field of study. Among the faculty members, 41 taught in this program, giving a faculty-to-student ratio of 1:26. The 2013 internal quality assessment indicated that the system and mechanism of program development and management were at a fair level; however, the overall result yielded a good level by the criteria of both the Office of the Higher Education Commission (4.29) and ONESQA (4.37). 2) The framework for assessing the quality of the program consisted of 4 dimensions and 15 indicators. 3) Assessment of the program yielded a good level of quality (4.04). 4) Recommendations to promote academic excellence included management and development of the program focusing on teacher reform toward a highly recognized profession; cultivation of the values, morals, ethics, and spirit of being a teacher; construction of specialized programs; development of faculty potential; enhancement of the demonstration school’s readiness level; and provision of dormitories for learning.Keywords: quality assessment, education program, Suan Sunandha Rajabhat University, academic excellence
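To illustrate how an indicator-based framework of this kind can be aggregated into an overall level, the sketch below uses hypothetical dimension names, weights, scores, and cut-off (not the study's actual 4 dimensions and 15 indicators) to compute a weighted mean on a 5-point scale.

# Illustrative aggregation of indicator scores into a weighted overall level.
dimensions = {
    # name: (weight, indicator scores on a 1-5 scale) -- all values are made up
    "curriculum":     (0.30, [4.0, 4.2, 3.8]),
    "faculty":        (0.25, [4.5, 4.0]),
    "students":       (0.25, [3.9, 4.1, 4.3]),
    "administration": (0.20, [3.7, 4.0]),
}

def overall_quality(dims):
    total = sum(w * (sum(scores) / len(scores)) for w, scores in dims.values())
    return total / sum(w for w, _ in dims.values())

score = overall_quality(dimensions)
print(f"overall = {score:.2f} -> {'good' if score >= 3.51 else 'fair'}")  # assumed cut-off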
Procedia PDF Downloads 295
678 Study of COVID-19 Intensity Correlated with Specific Biomarkers and Environmental Factors
Authors: Satendra Pal Singh, Dalip Kr. Kakru, Jyoti Mishra, Rajesh Thakur, Tarana Sarwat
Abstract:
COVID-19 remains an enigma as far as morbidity and mortality are concerned. The rate of recovery varies from person to person and depends upon the accessibility of the healthcare system and the roles played by physicians and caregivers. It is envisaged that, with the passage of time, people will become immune to this virus and those who are vulnerable will sustain themselves with the help of vaccines. The proposed study deals with how the severity of COVID-19 is associated with specific biomarkers and how these correlate with age and gender. We will assess the overall homeostasis of persons who were affected by coronavirus infection and also of those who recovered from it. Some people show severe effects, while others show very mild symptoms even though they have low CT values. Thus far, it is unclear why the virus has different effects on different people in terms of age, gender, and ABO blood type. According to available data, the fatality rate was 10.5 percent for patients with heart disease, 7.3 percent for diabetics, and 6 percent for those with other comorbidities. However, some COVID-19 cases are worse than others, and this is not yet fully explainable. Overall, the data show that ABO blood group influences susceptibility to SARS-CoV-2 infection, while another study also shows phenotypic effects of blood group related to COVID-19. It is an accepted fact that females have stronger immune systems than males, which may be related to the fact that females have two X chromosomes, which might carry more effective immunity-related genes capable of protecting the female. Specific sex hormones also induce a better immune response in a specific gender. This calls for in-depth analysis to gain insight into this dilemma. COVID-19 is still not fully characterized, and thus we are not very familiar with its biology, mode of infection, susceptibility, and overall viral load in the human body. How many virus particles are needed to infect a person? How do comorbidities contribute to coronavirus infection? Since the emergence of this virus in 2020, a large number of papers have been published and vaccines have been prepared, but a large number of questions remain unanswered. The proneness of humans to infection by COVID-19 needs to be established in order to develop a better strategy to fight this virus. Our study will examine the impact of demography on the severity of COVID-19 infection and, at the same time, will look into the gender-specific sensitivity of COVID-19 and the variation of different biochemical markers in COVID-19-positive patients. In addition, we will study the correlation, if any, between COVID-19 severity and ABO blood group type, and the occurrence of the most common blood group type among positive patients.Keywords: coronavirus, ABO blood group, age, gender
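One way to examine the proposed association between ABO blood group and disease severity is a chi-square test of independence, sketched below with made-up contingency counts (not study data).

# Illustrative association test: ABO blood group vs. COVID-19 severity category.
from scipy.stats import chi2_contingency

# Rows: blood groups A, B, AB, O; columns: mild, severe cases (hypothetical counts)
table = [
    [120, 45],   # A
    [ 90, 25],   # B
    [ 30, 10],   # AB
    [160, 35],   # O
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")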
Procedia PDF Downloads 98
677 Investigation of Different Surface Oxidation Methods on Pyrolytic Carbon
Authors: Lucija Pustahija, Christine Bandl, Wolfgang Kern, Christian Mitterer
Abstract:
Concerning today's ecological demands, producing reliable materials from sustainable resources is a continuously developing topic. One example is the production of carbon materials via pyrolysis of natural gases or biomass. The remarkable properties of pyrolytic carbon are utilized in various fields; in particular, application in the building industry is a promising route for pyrolytic carbon and composites based on it. For many applications, surface modification of carbon is an important step in tailoring its properties. Therefore, in this paper, different oxidation methods were investigated to prepare the carbon surface before functionalizing it with organosilanes, which act as coupling agents for epoxy and polyurethane resins. Made in such a way, a carbon-composite building material could be used as a lightweight, durable material applicable where water or air filtration/purification is needed. In this work, both wet and dry oxidation were investigated. Wet oxidation was first performed in solutions of nitric acid (at 120 °C and 150 °C), followed by oxidation in hydrogen peroxide (80 °C) for 3 and 6 h. Moreover, a hydrothermal method (under oxygen gas) in autoclaves was investigated. Dry oxidation was performed under plasma and corona discharges, using different power values to establish optimum conditions. Selected samples were then, in preliminary experiments, subjected to silanization of the surface with amino and glycidoxy organosilanes. The functionalized surfaces were examined by X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy, and by scanning electron microscopy. The results of the wet and dry oxidation methods indicated that the creation of functionalities was influenced by the temperature, the concentration of the reagents (and gases), and the duration of the treatment. Sequential oxidation in aqueous HNO₃ and H₂O₂ results in a higher content of oxygen functionalities at lower concentrations of oxidizing agents, compared to oxidizing the carbon with concentrated nitric acid. Plasma oxidation results in non-permanent functionalization of the carbon surface, so it is necessary to find adequate oxidation parameters that enable longer stability of the functionalities. Results of the functionalization of the carbon surfaces with organosilanes will be presented as well.Keywords: building materials, dry oxidation, organosilanes, pyrolytic carbon, resins, surface functionalization, wet oxidation
Procedia PDF Downloads 100