Search results for: propagation of electromagnetic fields
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3362


572 Evaluation of Double Displacement Process via Gas Dumpflood from Multiple Gas Reservoirs

Authors: B. Rakjarit, S. Athichanagorn

Abstract:

Double displacement process is a method in which gas is injected at an updip well to displace the oil bypassed by the waterflooding operation from a downdip water injector. As gas injection is costly and a large amount of gas is needed, gas dump-flood from multiple gas reservoirs is an attractive alternative. The objective of this paper is to demonstrate the benefits of the novel approach of double displacement process via gas dump-flood from multiple gas reservoirs. A reservoir simulation model consisting of a dipping oil reservoir and several underlying layered gas reservoirs was constructed in order to investigate the performance of the proposed method. Initially, water was injected via the downdip well to displace oil towards the producer located updip. When the water cut at the producer became high, the updip well was shut in and perforated in the gas zones in order to dump gas into the oil reservoir. At this point, the downdip well was opened for production. In order to optimize oil recovery, oil production and water injection rates as well as the perforation strategy for the gas reservoirs were investigated for different numbers of gas reservoirs having various depths and thicknesses. Gas dump-flood from multiple gas reservoirs can help increase the oil recovery after waterflooding by up to 10%. Although the additional oil recovery is slightly lower than that obtained in the conventional double displacement process, the proposed process requires only a small completion cost for the gas zones and no operating cost, while the conventional method incurs high capital investment in gas compression facilities and a high-pressure gas pipeline as well as additional operating cost. From the simulation study, oil recovery can be optimized by producing oil at a suitable rate and perforating the gas zones with the right strategy, which depends on the depths, thicknesses and number of the gas reservoirs.
The conventional double displacement process has been studied and successfully implemented in many fields around the world. However, the method of dumping gas into the oil reservoir instead of injecting it from the surface during the second displacement process has never been studied. The study of this novel approach will help practicing engineers understand the benefits of the method and implement it at minimum cost.
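The dump-flood mechanism relies on the gas zones being over-pressured relative to the oil reservoir at the perforations. As a rough illustration of that condition, here is a minimal sketch in Python; all pressures, depths and gradients below are hypothetical placeholders, not values from the study.

```python
def driving_pressure(p_gas, gas_gradient, p_oil, depth_gas, depth_oil):
    """Pressure differential (psi) available to dump gas into the oil zone.

    p_gas, p_oil : zone pressures (psi) at their own datum depths
    gas_gradient : gas-column gradient in the wellbore (psi/ft)
    depth_gas, depth_oil : datum depths (ft), with depth_gas > depth_oil
    """
    # Carry the gas-zone pressure up the wellbore to the oil-zone depth,
    # then subtract the oil-zone pressure; a positive result means gas
    # flows naturally into the oil reservoir, without surface compression.
    p_at_oil_zone = p_gas - gas_gradient * (depth_gas - depth_oil)
    return p_at_oil_zone - p_oil

# Hypothetical example values:
dp = driving_pressure(p_gas=4200.0, gas_gradient=0.08,
                      p_oil=3500.0, depth_gas=9000.0, depth_oil=8000.0)
print(dp)  # 620.0 psi of natural drive
```

A positive differential is what lets the process avoid the compression facilities and high-pressure pipeline of the conventional method.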

Keywords: gas dump-flood, multi-gas layers, double displacement process, reservoir simulation

Procedia PDF Downloads 408
571 The Impact of E-commerce on the Improvement of Banking Services

Authors: Azzi Mohammed Amin

Abstract:

Summary: This note aims to demonstrate the impact of electronic commerce on the quality of banking services, to answer the questions raised in the problem statement, and to identify the methods banks apply to improve banking quality. It establishes a conceptual framework for electronic commerce and electronic banking and includes a case study of the Algerian Popular Credit Bank to measure the impact of electronic commerce on the quality of banking services. The focus is on electronic banking services as a field of modern knowledge, one in which bank management has concluded that the electronic delivery of services is the principal arena in which to compete and improve quality. A survey of some banks operating in Algeria showed that the majority rely on websites, especially on the Internet, to present themselves and their affiliates and to reach both traditional and electronic customers, yet they are still at the beginning of the road, offering only some plastic cards, e-banking, mobile banking, ATMs and fast transfers. Establishing an electronic network requires an effective banking system covering all economic sectors; it also requires Algerian banks to be ready to receive this technology by modernizing management and services (expanding the use of credit cards, electronic money, and the Internet), and by developing banking media to help spread an electronic banking culture in the community.
The study concludes that the communications revolution has made e-banking services inevitable in determining the future and development of banks. Electronic commerce improves banking services by providing an extensive information base, supporting on-site research and development, and enabling marketing strategies, all of which help banks raise the performance of their services. Some risks remain, but they concern the means of providing the electronic service rather than the nature of the service itself. Shifting services from traditional to electronic channels also reduces the cost of providing high-quality service and thus reaches the largest possible customer segment.

Keywords: e-commerce, e-banking, impact e-commerce, B2C

Procedia PDF Downloads 89
570 Practical Software for Optimum Bore Hole Cleaning Using Drilling Hydraulics Techniques

Authors: Abdulaziz F. Ettir, Ghait Bashir, Tarek S. Duzan

Abstract:

Proper well planning is vital to the success of any drilling program: it prevents or overcomes drilling problems and minimizes operating costs. The hydraulic system plays an active role during drilling operations; a well-designed system accelerates the drilling effort and lowers the overall well cost, whereas an improperly designed hydraulic system can slow the drill rate, fail to clean the hole of cuttings, and cause kicks. In most cases, common sense and commercially available computer programs are the only elements required to design the hydraulic system. Drilling optimization is the logical process of analyzing the effects and interactions of drilling variables through applied drilling and hydraulic equations and mathematical modeling to achieve maximum drilling efficiency at minimum drilling cost. The practical software adopted in this paper defines drilling optimization models around four optimum keys, namely Opti-flow, Opti-clean, Opti-slip and Opti-nozzle, which together help achieve high drilling efficiency at lower cost. The data used in this research come from vertical and horizontal wells recently drilled in Waha Oil Company fields. The input data are formation type, geopressures, hole geometry, bottom hole assembly and mud rheology. Upon data analysis, the results from all wells show that the proposed program provides higher accuracy than the company's existing approach in terms of hole cleaning efficiency and cost breakdown, taking the actual well data as the reference base. Finally, it is recommended to use the established optimization software during drilling design to obtain correct drilling parameters that provide high drilling efficiency, good borehole cleaning and appropriate hydraulic parameters, which in turn minimize hole problems and control drilling operation costs.
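Hole cleaning hinges on keeping the annular velocity above the cuttings slip velocity, which is presumably the kind of check an Opti-clean key evaluates. A minimal sketch of the standard field-units annular velocity formula follows; the cleaning threshold is a common rule of thumb, not a value from the paper.

```python
def annular_velocity(q_gpm, hole_d_in, pipe_od_in):
    """Annular velocity in ft/min from flow rate (gpm) and diameters (in).

    Uses the standard field-units relation v = 24.51 * Q / (Dh^2 - Dp^2).
    """
    return 24.51 * q_gpm / (hole_d_in**2 - pipe_od_in**2)

# 600 gpm through an 8.5 in hole around 5 in drill pipe:
v = annular_velocity(600.0, 8.5, 5.0)
print(round(v, 1))  # ~311.2 ft/min

# A common rule of thumb is to keep v above roughly 100-120 ft/min
# for adequate cuttings transport in vertical sections.
CLEANING_THRESHOLD = 120.0  # ft/min, rule-of-thumb assumption
print(v > CLEANING_THRESHOLD)  # True -> hole cleaning likely adequate
```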

Keywords: optimum keys, opti-flow, opti-clean, opti-slip, opti-nozzle

Procedia PDF Downloads 319
569 Thermal Stability and Electrical Conductivity of Ca₅Mg₄₋ₓMₓ(VO₄)₆ (0 ≤ x ≤ 4) where M = Zn, Ni Measured by Impedance Spectroscopy

Authors: Anna S. Tolkacheva, Sergey N. Shkerin, Kirill G. Zemlyanoi, Olga G. Reznitskikh, Pavel D. Khavlyuk

Abstract:

Calcium oxovanadates with a garnet-related structure are multifunctional oxides with applications in fields such as photoluminescence, microwave dielectrics, and magneto-dielectrics. For example, vanadate garnets are self-luminescent compounds. They attract attention as RE-free broadband excitation and emission phosphors and are candidate materials for UV-based white light-emitting diodes (WLEDs). Ca₅M₄(VO₄)₆ (M = Mg, Zn, Co, Ni, Mn) compounds are also considered promising substrate materials for microwave devices. However, the relation between their structure, composition and physical/chemical properties remains unclear. Given these observations, the goals of this study are to synthesise Ca₅M₄(VO₄)₆ (M = Mg, Zn, Ni) and to study their thermal and electrical properties. Solid solutions Ca₅Mg₄₋ₓMₓ(VO₄)₆ (0 ≤ x ≤ 4), where M is Zn or Ni, have been synthesized by the sol-gel method. The single-phase character of the final products was checked by powder X-ray diffraction on a Rigaku D/MAX-2200 diffractometer using Cu Kα radiation in the 2θ range from 15° to 70°. The dependence of thermal properties on the chemical composition of the solid solutions was studied using simultaneous thermal analysis (DSC and TG), conducted in a Netzsch STA 449C Jupiter simultaneous analyser, in Ar atmosphere, over the temperature range from 25 to 1100 °C at a heating rate of 10 K·min⁻¹. Coefficients of thermal expansion (CTE) were obtained by dilatometry in air up to 800 °C using a Netzsch 402PC dilatometer at a heating rate of 1 K·min⁻¹. Impedance spectra were obtained via the two-probe technique with a Parstat 2273 impedance meter in air up to 700 °C with variation of pH₂O from 0.04 to 3.35 kPa. Rietveld refinement of the crystal structure revealed cation deficiency in the Ca and Mg sublattice under the substitution of MgO with ZnO up to 1/6. The melting point was found to decrease as x changes from 0 to 4 in Ca₅Mg₄₋ₓMₓ(VO₄)₆, where M is Zn or Ni.
It was observed that electrical conductivity does not depend on air humidity. The reported study was funded by the RFBR Grant No. 17–03–01280. Sample attestation was carried out in the Shared Access Centers at the IHTE UB RAS.
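As an aside on how a mean CTE is extracted from a dilatometry run such as the one described above, the calculation is simply the relative elongation divided by the temperature interval. A minimal sketch; the sample numbers are illustrative placeholders, not measured values from this study.

```python
def mean_cte(l0_mm, dl_um, t0_c, t1_c):
    """Mean linear coefficient of thermal expansion (K^-1).

    l0_mm      : initial sample length (mm)
    dl_um      : measured elongation over the run (micrometres)
    t0_c, t1_c : start and end temperatures (deg C)
    """
    # Relative elongation per kelvin; dl is converted from um to mm.
    return (dl_um * 1e-3 / l0_mm) / (t1_c - t0_c)

# Illustrative numbers: a 10 mm sample elongating 85 um from 25 to 800 C
alpha = mean_cte(l0_mm=10.0, dl_um=85.0, t0_c=25.0, t1_c=800.0)
print(f"{alpha:.2e}")  # about 1.10e-05 per kelvin
```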

Keywords: garnet structure, electrical conductivity, thermal expansion, thermal properties

Procedia PDF Downloads 155
568 Calibration of Mini TEPC and Measurement of Lineal Energy in a Mixed Radiation Field Produced by Neutrons

Authors: I. C. Cho, W. H. Wen, H. Y. Tsai, T. C. Chao, C. J. Tung

Abstract:

A tissue-equivalent proportional counter (TEPC) is a useful instrument for measuring radiation single-event energy depositions in a subcellular target volume. The measured quantity is the microdosimetric lineal energy, which determines the relative biological effectiveness (RBE) for radiation therapy or the radiation weighting factor (WR) for radiation protection. A TEPC is generally used in a mixed radiation field, where each component radiation has its own RBE or WR value. To reduce the pile-up effect during radiotherapy measurements, a miniature TEPC (mini TEPC) with a cavity size on the order of 1 mm may be required. In the present work, a homemade mini TEPC with a cylindrical cavity of 1 mm in both diameter and height was constructed to measure the lineal energy spectrum of a mixed radiation field with high- and low-LET radiations. Instead of using external radiation beams to penetrate the detector wall, mixed radiation fields were produced by the interactions of neutrons with TEPC walls that contained small plugs of different materials, i.e. Li, B, A150, Cd and N. In all measurements, the mini TEPC was placed at the beam port of the Tsing Hua Open-pool Reactor (THOR). Measurements were performed using the propane-based tissue-equivalent gas mixture, i.e. 55% C₃H₈, 39.6% CO₂ and 5.4% N₂ by partial pressure. A gas pressure of 422 torr was applied to simulate a biological site of 1 μm diameter. The calibration of the mini TEPC was performed using two marking points in the lineal energy spectrum, i.e. the proton edge and the electron edge. Measured spectra revealed high lineal energy (> 100 keV/μm) peaks due to neutron-capture products, medium lineal energy (10–100 keV/μm) peaks from hydrogen-recoil protons, and low lineal energy (< 10 keV/μm) peaks from reactor photons. For the Li and B plugs, the high lineal energy peaks were quite prominent. The medium lineal energy peaks were in the decreasing order of Li, Cd, N, A150, and B.
The low lineal energy peaks were smaller compared to other peaks. This study demonstrated that internally produced mixed radiations from the interactions of neutrons with different plugs in the TEPC wall provided a useful approach for TEPC measurements of lineal energies.
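The proton-edge calibration mentioned above amounts to anchoring a linear pulse-height-to-lineal-energy conversion at a known spectral feature. A minimal sketch, assuming a linear detector response and using a proton-edge value near 136 keV/μm that is commonly cited in the microdosimetry literature; the channel numbers are hypothetical.

```python
PROTON_EDGE_KEV_UM = 136.0  # commonly cited value; treated as an assumption

def calibrate_lineal_energy(channels, ch_proton_edge):
    """Convert pulse-height channels to lineal energy (keV/um).

    Assumes a linear detector response anchored at the proton edge;
    a second anchor (the electron edge) could refine the calibration.
    """
    gain = PROTON_EDGE_KEV_UM / ch_proton_edge
    return [gain * ch for ch in channels]

# Hypothetical spectrum channels, with the proton edge seen at channel 800:
y = calibrate_lineal_energy([50, 400, 800], ch_proton_edge=800)
print([round(v, 3) for v in y])  # [8.5, 68.0, 136.0]
```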

Keywords: TEPC, lineal energy, microdosimetry, radiation quality

Procedia PDF Downloads 470
567 Exploring Multimodal Communication: Intersections of Language, Gesture, and Technology

Authors: Rasha Ali Dheyab

Abstract:

In today's increasingly interconnected and technologically-driven world, communication has evolved beyond traditional verbal exchanges. This paper delves into the fascinating realm of multimodal communication, a dynamic field at the intersection of linguistics, gesture studies, and technology. The study of how humans convey meaning through a combination of spoken language, gestures, facial expressions, and digital platforms has gained prominence as our modes of interaction continue to diversify. This exploration begins by examining the foundational theories in linguistics and gesture studies, tracing their historical development and mutual influences. It further investigates the role of nonverbal cues, such as gestures and facial expressions, in augmenting and sometimes even altering the meanings conveyed by spoken language. Additionally, the paper delves into the modern technological landscape, where emojis, GIFs, and other digital symbols have emerged as new linguistic tools, reshaping the ways in which we communicate and express emotions. The interaction between traditional and digital modes of communication is a central focus of this study. The paper investigates how technology has not only introduced new modes of expression but has also influenced the adaptation of existing linguistic and gestural patterns in online discourse. The emergence of virtual reality and augmented reality environments introduces yet another layer of complexity to multimodal communication, offering new avenues for studying how humans navigate and negotiate meaning in immersive digital spaces. Through a combination of literature review, case studies, and theoretical analysis, this paper seeks to shed light on the intricate interplay between language, gesture, and technology in the realm of multimodal communication. 
By understanding how these diverse modes of expression intersect and interact, we gain valuable insights into the ever-evolving nature of human communication and its implications for fields ranging from linguistics and psychology to human-computer interaction and digital anthropology.

Keywords: multimodal communication, linguistics, gesture studies, emojis, verbal communication, digital

Procedia PDF Downloads 81
566 Differences in Production of Knowledge between Internationally Mobile versus Nationally Mobile and Non-Mobile Scientists

Authors: Valeria Aman

Abstract:

The presented study examines the impact of international mobility on knowledge production among mobile scientists and within the sending and receiving research groups. Scientists are relevant to the dynamics of knowledge production because scientific knowledge is mainly characterized by embeddedness and tacitness. International mobility enables the dissemination of scientific knowledge to other places and encourages new combinations of knowledge. It can also increase the interdisciplinarity of research by forming synergetic combinations of knowledge. Particularly innovative ideas can have their roots in related research domains and are sometimes transferred only through the physical mobility of scientists. Diversity among scientists with respect to their knowledge base can act as an engine for the creation of knowledge. It is therefore relevant to study how knowledge acquired through international mobility affects the knowledge production process. In certain research domains, international mobility may be essential to contextualize knowledge and to gain access to knowledge located at distant places. The knowledge production process contingent on the type of international mobility and the epistemic culture of a research field is examined. The production of scientific knowledge is a multi-faceted process, the output of which is mainly published in scholarly journals. Therefore, the study builds upon publication and citation data covered in Elsevier’s Scopus database for the period of 1996 to 2015. To analyse these data, bibliometric and social network analysis techniques are used. A basic analysis of scientific output using publication data, citation data and data on co-authored publications is combined with a content map analysis. Abstracts of publications indicate whether a research stay abroad makes an original contribution methodologically, theoretically or empirically. 
Moreover, co-citations are analysed to map linkages among scientists and emerging research domains. Finally, acknowledgements are studied, since they can function as channels of formal and informal communication between the actors involved in the process of knowledge production. The results provide a better understanding of how the international mobility of scientists contributes to the production of knowledge by contrasting the knowledge production dynamics of internationally mobile scientists with those of nationally mobile or immobile scientists. The findings also indicate whether international mobility accelerates the production of knowledge and the emergence of new research fields.
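Part of the social network analysis described above, such as counting each scientist's distinct co-authors, can be sketched without specialized software. A minimal illustration in Python; the author names and publication records are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def coauthor_degree(papers):
    """Number of distinct co-authors per author.

    papers: list of papers, each given as its list of author names.
    """
    neighbours = defaultdict(set)
    for authors in papers:
        # Every pair of authors on one paper is a co-authorship edge.
        for a, b in combinations(authors, 2):
            neighbours[a].add(b)
            neighbours[b].add(a)
    return {author: len(peers) for author, peers in neighbours.items()}

# Hypothetical publication records:
papers = [["Ahmed", "Bauer"], ["Ahmed", "Chen"], ["Bauer", "Chen", "Diaz"]]
print(coauthor_degree(papers))
# {'Ahmed': 2, 'Bauer': 3, 'Chen': 3, 'Diaz': 2}
```

In a real bibliometric study this degree measure would be computed over the Scopus co-authorship records rather than toy lists.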

Keywords: bibliometrics, diversity, interdisciplinarity, international mobility, knowledge production

Procedia PDF Downloads 293
565 A System Architecture for Hand Gesture Control of Robotic Technology: A Case Study Using a Myo™ Arm Band, DJI Spark™ Drone, and a Staubli™ Robotic Manipulator

Authors: Sebastian van Delden, Matthew Anuszkiewicz, Jayse White, Scott Stolarski

Abstract:

Industrial robotic manipulators have been commonplace in the manufacturing world since the early 1960s, and unmanned aerial vehicles (drones) have only begun to realize their full potential in the service industry and the military. The omnipresence of these technologies in their respective fields will only become more potent in coming years. While these technologies have greatly evolved over the years, the typical approach to human interaction with these robots has not. In the industrial robotics realm, a manipulator is typically jogged around using a teach pendant and programmed using a networked computer or the teach pendant itself via a proprietary software development platform. Drones are typically controlled using a two-handed controller equipped with throttles, buttons, and sticks, an app that can be downloaded to one’s mobile device, or a combination of both. This application-oriented work offers a novel approach to human interaction with both unmanned aerial vehicles and industrial robotic manipulators via hand gestures and movements. Two systems have been implemented, both of which use a Myo™ armband to control either a drone (DJI Spark™) or a robotic arm (Stäubli™ TX40). The methodologies developed by this work present a mapping of armband gestures (fist, finger spread, swing hand in, swing hand out, swing arm left/up/down/right, etc.) to either drone or robot arm movements. The findings of this study present the efficacy and limitations (precision and ergonomic) of hand gesture control of two distinct types of robotic technology. All source code associated with this project will be open sourced and placed on GitHub. In conclusion, this study offers a framework that maps hand and arm gestures to drone and robot arm control. The system has been implemented using current ubiquitous technologies, and these software artifacts will be open sourced for future researchers or practitioners to use in their work.
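The gesture-to-command mapping described above can be sketched as a simple lookup with a safe fallback. The gesture labels follow those listed in the abstract, but the command names and the fallback behaviour are hypothetical illustrations, not the project's actual API.

```python
# Hypothetical command vocabulary; a real controller would invoke the
# vendor SDK's motion primitives (drone or robot arm) at this point.
DRONE_COMMANDS = {
    "fist": "land",
    "finger_spread": "take_off",
    "swing_hand_in": "move_forward",
    "swing_hand_out": "move_backward",
    "swing_arm_left": "yaw_left",
    "swing_arm_right": "yaw_right",
    "swing_arm_up": "ascend",
    "swing_arm_down": "descend",
}

def dispatch(gesture, mapping=DRONE_COMMANDS):
    """Translate a recognized armband gesture into a motion command.

    Unrecognized gestures fall back to holding position, a sensible
    safety default for both drones and manipulators.
    """
    return mapping.get(gesture, "hover")

print(dispatch("fist"))   # land
print(dispatch("shrug"))  # hover (safe fallback)
```

Swapping the mapping dictionary is what would retarget the same armband gestures from the drone to the robotic arm.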

Keywords: human robot interaction, drones, gestures, robotics

Procedia PDF Downloads 157
564 Superchaotropicity: Grafted Surface to Probe the Adsorption of Nano-Ions

Authors: Raimoana Frogier, Luc Girard, Pierre Bauduin, Diane Rebiscoul, Olivier Diat

Abstract:

Nano-ions (NIs) are ionic species or clusters of nanometric size. Their low charge density and the delocalization of their charges give special properties to some NIs belonging to the chemical classes of polyoxometalates (POMs) or boron clusters. They have the particularity of interacting non-covalently with neutral hydrated surfaces or interfaces such as assemblies of surface-active molecules (micelles, vesicles, lyotropic liquid crystals), foam bubbles or emulsion droplets. This makes it possible to classify those NIs in the Hofmeister series as superchaotropic ions. The mechanism of adsorption is complex, linked to the simultaneous dehydration of the ion and of the molecule or supramolecular assembly with which it interacts, with an overall enthalpic gain in the free energy of the system. This interaction process is reversible and is sufficiently pronounced to induce changes in molecular and supramolecular shape or conformation, and phase transitions in the liquid phase, all at sub-millimolar ionic concentrations. This property of some NIs opens up new possibilities for applications in fields as varied as biochemistry for solubilization and the recovery of metals of interest by foams in the form of NIs... In order to better understand the physico-chemical mechanisms at the origin of this interaction, we use silicon wafers functionalized by non-ionic oligomers (polyethylene glycol chains, or PEG) to study in situ, by X-ray reflectivity, the interaction of NIs with the grafted chains. This study, carried out at the ESRF (European Synchrotron Radiation Facility), has shown that the adsorption of NIs such as POMs has very fast kinetics. Moreover, the distribution of the NIs in the grafted PEG chain layer was quantified. These results are very encouraging and confirm what has been observed on soft interfaces such as micelles or foams.
The possibility to play on the density, length and chemical nature of the grafted chains makes this system an ideal tool to provide kinetic and thermodynamic information to decipher the complex mechanisms at the origin of this adsorption.

Keywords: adsorption, nano-ions, solid-liquid interface, superchaotropicity

Procedia PDF Downloads 67
563 Analysis of Citation Rate and Data Reuse for Openly Accessible Biodiversity Datasets on Global Biodiversity Information Facility

Authors: Nushrat Khan, Mike Thelwall, Kayvan Kousha

Abstract:

Making research data openly accessible has been mandated by most funders over the last 5 years, as it promotes reproducibility in science and reduces duplication of effort to collect the same data. There is evidence that articles that publicly share research data have higher citation rates in the biological and social sciences. However, how and whether shared data are being reused is not always apparent, as such information is not easily accessible from the majority of research data repositories. This study aims to understand the practice of data citation and how data are reused over the years, focusing on biodiversity since research data are frequently reused in this field. Metadata for 38,878 datasets, including citation counts, were collected through the Global Biodiversity Information Facility (GBIF) API for this purpose. GBIF was used as a data source since it provides citation counts for datasets, a feature not commonly available in most repositories. Analysis of dataset types, citation counts, and the creation and update times of datasets suggests that citation rate varies across dataset types: occurrence datasets, which contain more granular information, have higher citation rates than checklist and metadata-only datasets. Another finding is that biodiversity datasets on GBIF are frequently updated, which is unique to this field. The majority of datasets from the earliest year, 2007, were updated after 11 years, and every dataset had been updated at least once since creation. For each year between 2007 and 2017, we compared the correlations between update time and citation rate for four different types of datasets. While recent datasets do not show any correlation, 3- to 4-year-old datasets show a weak correlation, with datasets that were updated more recently receiving more citations. The results suggest that it takes several years for research datasets to accumulate citations.
However, this investigation found that when the same datasets are searched on Google Scholar or Scopus, the number of citations is often not the same as on GBIF. Hence, a future aim is to further explore the citation count system adopted by GBIF to evaluate its reliability and whether it can be applied to other fields of study as well.
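The weak correlations reported above between update recency and citation rate are presumably rank correlations; the abstract does not name the coefficient, so Spearman's ρ is used here as an assumption. A minimal tie-free implementation; the dataset ages and citation counts are illustrative, not GBIF data.

```python
def _ranks(values):
    """1-based ranks of the values; ties are not handled, for brevity."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0] * len(values)
    for position, index in enumerate(order):
        ranks[index] = position + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n * n - 1))

# Illustrative values: years since last update vs. citation counts
years_since_update = [1, 2, 3, 5]
citations = [40, 22, 15, 3]
print(spearman_rho(years_since_update, citations))  # -1.0 (perfectly inverse)
```

A negative ρ here corresponds to the reported pattern: more recently updated datasets receiving more citations.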

Keywords: data citation, data reuse, research data sharing, webometrics

Procedia PDF Downloads 178
562 Analyzing Industry-University Collaboration Using Complex Networks and Game Theory

Authors: Elnaz Kanani-Kuchesfehani, Andrea Schiffauerova

Abstract:

Due to the novelty of nanotechnology, its highly knowledge-intensive content, and its invaluable application in almost all technological fields, close interaction between university and industry is essential. A possible gap between the academic strength to generate good nanotechnology ideas and the industrial capacity to absorb them can thus have far-reaching consequences. In order to enhance the collaboration between the two parties, a better understanding of knowledge transfer within the university-industry relationship is needed. The objective of this research is to investigate the research collaboration between academia and industry in Canadian nanotechnology and to propose the best cooperative strategy to maximize the quality of the produced knowledge. First, a network of all Canadian academic and industrial nanotechnology inventors is constructed using patent data from the USPTO (United States Patent and Trademark Office), and it is analyzed with social network analysis software. The actual level of university-industry collaboration in Canadian nanotechnology is determined, and the significance of each group of actors in the network (academic vs. industrial inventors) is assessed. Second, a novel methodology is proposed in which the network of nanotechnology inventors is assessed from a game-theoretic perspective. It involves studying a cooperative game with n players, each having at most n-1 decisions to choose from. The equilibrium leads to a strategy for all players to choose their co-worker in the next period so as to maximize the correlated payoff of the game. The payoffs of the game represent the quality of the produced knowledge, based on the citations of the patents. The best suggestion for the next collaborative relationship is provided for each actor from a game-theoretic point of view in order to maximize the quality of the produced knowledge.
One of the major contributions of this work is the novel approach which combines game theory and social network analysis for the case of large networks. This approach can serve as a powerful tool in the analysis of the strategic interactions of the network actors within the innovation systems and other large scale networks.
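The partner-selection step described above, where each inventor picks the next co-worker that maximizes their payoff, can be sketched as a best-response lookup over a citation-based payoff table. The payoff numbers below are hypothetical placeholders, not values from the patent data.

```python
def best_response(payoffs):
    """For each actor, pick the partner with the highest expected payoff.

    payoffs[a][b] = expected quality (e.g. mean forward citations) of the
    knowledge produced if actor a collaborates with actor b next period.
    """
    return {actor: max(options, key=options.get)
            for actor, options in payoffs.items()}

# Hypothetical payoff table over one university (U1) and two firms:
payoffs = {
    "U1":    {"FirmA": 8.5, "FirmB": 6.0},
    "FirmA": {"U1": 7.0, "FirmB": 2.5},
    "FirmB": {"U1": 5.5, "FirmA": 3.0},
}
print(best_response(payoffs))
# {'U1': 'FirmA', 'FirmA': 'U1', 'FirmB': 'U1'}
```

Mutual best responses (here U1 and FirmA choosing each other) are the stable pairings such an equilibrium analysis would recommend.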

Keywords: cooperative strategy, game theory, industry-university collaboration, knowledge production, social network analysis

Procedia PDF Downloads 258
561 Formula Student Car: Design, Analysis and Lap Time Simulation

Authors: Rachit Ahuja, Ayush Chugh

Abstract:

Aerodynamic forces and moments, as well as tire-road forces, largely affect the maneuverability of a vehicle. Car manufacturers are strongly influenced by the aerodynamic improvements made in formula cars, and there is a constant effort to apply these improvements to road vehicles. In motor racing, the key differentiating factor in a high-performance car is its ability to maintain the highest possible acceleration in the appropriate direction. One of the main areas of concern in motor racing is the balance of aerodynamic forces and the streamlining of the airflow across the body of the vehicle. At present, formula racing cars are regulated by stringent FIA norms, with constraints on vehicle dimensions, engine capacity, etc., so one of the fields with large scope for improvement is the aerodynamics of the vehicle. In this project work, an attempt has been made to design a formula-student (FS) car, improve its aerodynamic characteristics through steady-state CFD simulations, and simultaneously calculate its lap time. Initially, a CAD model of a formula student car is made using SOLIDWORKS as per the given dimensions, and a steady-state external airflow simulation is performed on the baseline model, without any add-on devices, to evaluate and analyze the airflow pattern around the car and the aerodynamic forces using the FLUENT solver. A detailed survey of different add-on devices used in racing applications, such as the front wing, diffuser, shark fin and T-wing, is made, and geometric models of these add-on devices are created. These add-on devices are assembled with the baseline model, and steady-state CFD simulations are run on the modified car to evaluate the aerodynamic effects of the devices. Finally, lap time simulations of the formula student car with and without the add-on devices are compared with the help of MATLAB.
Aerodynamic performance measures such as lift, drag and their coefficients are evaluated for different configurations and designs of the add-on devices at different vehicle speeds. The parametric CFD simulations of the formula student car fitted with add-on devices show a considerable reduction in drag and lift forces, besides streamlining the airflow across the car. The best possible configuration of these add-on devices is obtained from these CFD simulations, and the use of the devices shows an improvement in the performance of the car, which can be compared through the various lap time simulations.
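One ingredient of such a lap time simulation is the maximum cornering speed, which downforce raises by adding to the normal load. For a point-mass model the lateral balance m·v²/r = μ(mg + ½ρ·C_L·A·v²) can be solved in closed form. A minimal sketch with illustrative values, not the project's actual parameters.

```python
import math

def corner_speed(radius_m, mu, mass_kg, cl_a_m2, rho=1.225, g=9.81):
    """Max steady-state cornering speed (m/s) of a point-mass car.

    Solves m*v^2/r = mu*(m*g + 0.5*rho*ClA*v^2) for v, where cl_a_m2 is
    the lift-coefficient-times-area product (downforce taken positive).
    """
    denominator = 1.0 - mu * rho * cl_a_m2 * radius_m / (2.0 * mass_kg)
    if denominator <= 0:
        # Aero grip grows faster than the cornering demand: unbounded model.
        return math.inf
    return math.sqrt(mu * g * radius_m / denominator)

# Illustrative FS-like numbers: 250 kg car, mu = 1.5, 20 m corner radius
v_no_aero = corner_speed(20.0, 1.5, 250.0, cl_a_m2=0.0)
v_aero = corner_speed(20.0, 1.5, 250.0, cl_a_m2=3.0)
print(round(v_no_aero, 2), round(v_aero, 2))  # downforce raises corner speed
```

Integrating such segment speeds over a track layout is essentially what the MATLAB lap time comparison does.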

Keywords: aerodynamic performance, front wing, laptime simulation, t-wing

Procedia PDF Downloads 197
560 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) was created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, career guidance is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which can lead to many other problems such as job shifting, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that helps in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies such as KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to the collected data to predict suitable careers. Besides helping users with their career choice, these systems provide numerous opportunities that are very useful while making this hard decision.
They help candidates recognize where they specifically lack sufficient skills so that they can improve those skills. They are also capable of offering an e-learning platform that takes into account the user's knowledge gaps. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
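As an illustration of the classification step such systems perform, here is a minimal k-nearest-neighbours sketch; the feature scores and career labels are invented for the example and are not from the paper:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify a candidate's feature vector by majority vote
    among the k nearest labelled examples (Euclidean distance)."""
    dists = sorted(
        (math.dist(features, query), label) for features, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical training data: (math, verbal, coding) scores -> career label.
train = [
    ((9, 3, 8), "software engineering"),
    ((8, 2, 9), "software engineering"),
    ((3, 9, 2), "journalism"),
    ((2, 8, 3), "journalism"),
    ((7, 7, 4), "product management"),
]
print(knn_predict(train, (8, 3, 9)))  # -> software engineering
```

A production system would of course use validated psychometric features and a trained model rather than hand-picked points.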

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 80
559 Numerical Analysis of Heat Transfer in Water Channels of the Opposed-Piston Diesel Engine

Authors: Michal Bialy, Marcin Szlachetka, Mateusz Paszko

Abstract:

This paper discusses the CFD results of heat transfer in water channels in the engine body. The research engine was a newly designed Diesel combustion engine. The engine has three cylinders with three pairs of opposed pistons inside. The engine will be able to generate 100 kW mechanical power at a crankshaft speed of 3,800-4,000 rpm. The water channels are in the engine body along the axis of the three cylinders. These channels are around the three combustion chambers. The water channels transfer the heat of combustion occurring in the cylinders to the external radiator. This CFD research was based on the ANSYS Fluent software and aimed to optimize the geometry of the water channels. These channels should provide a maximum flow of heat from the combustion chambers to the external radiator. Based on the parallel simulation research, the boundary and initial conditions enabled us to specify average values of key parameters for our numerical analysis. Our simulation used the averaged momentum equations and the two-equation k-epsilon turbulence model; a realizable k-epsilon model with standard wall functions was applied. The turbulence intensity factor was 10%. The working fluid mass flow rate was calculated for a single typical value, specified in line with the research into the flow rate of automotive engine cooling pumps used in engines of similar power. The research uses a series of geometric models which differ, for instance, in the shape of the cross-section of the channel along the axis of the cylinder. The results are presented as colour distribution maps of temperature, velocity fields and heat flow through the cylinder walls. Due to limitations of space, our paper presents the results on the most representative geometric model only. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No.
POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.

Keywords: ANSYS Fluent, combustion engine, computational fluid dynamics (CFD), cooling system

Procedia PDF Downloads 219
558 Problem Solving: Process or Product? A Mathematics Approach to Problem Solving in Knowledge Management

Authors: A. Giannakopoulos, S. B. Buckley

Abstract:

Problem solving in any field is recognised as a prerequisite for any advancement in knowledge. For example, in South Africa it is one of the seven critical outcomes of education, together with critical thinking. Since a systematic approach to problem solving was initiated in mathematics by the great mathematician George Polya (the father of problem solving), more detailed and comprehensive approaches to problem solving have been developed. This paper is based on the findings by the author and subsequent recommendations for further research in problem solving and critical thinking. Although the study was done in mathematics, there is no doubt by now in almost anyone’s mind that mathematics is involved to a greater or a lesser extent in all fields, from symbols, to variables, to equations, to logic, to critical thinking. Therefore it stands to reason that mathematical principles and learning cannot be divorced from any field. In knowledge management situations, the types of problems are similar to mathematics problems, varying from simple to analogical to complex, and from well-structured to ill-structured problems. While simple problems can be solved by employees by adhering to prescribed sequential steps (the process), analogical and complex problems cannot be proceduralised, and that diminishes the organisation's capacity for knowledge creation and innovation. The low efficiency in some organisations and the low pass rates in mathematics prompted the author to view problem solving as a product. The authors argue that using mathematical approaches to knowledge management problem solving and treating problem solving as a product will empower the employee, through further training, to tackle analogical and complex problems.
The question the authors asked was: if it is true that problem solving and critical thinking are indeed basic skills necessary for the advancement of knowledge, why is there so little knowledge management (KM) literature about them, and about how they are connected to and advance KM? This paper concludes with a conceptual model which is based on generally accepted principles of knowledge acquisition (developing a learning organisation) and of knowledge creation, sharing, dissemination and storage, the five pillars of knowledge management (KM). This model also expands on Gray’s framework of KM practices and problem solving, and opens the doors to a new approach to training employees on general and domain-specific problems, which can be adapted in any type of organisation.

Keywords: critical thinking, knowledge management, mathematics, problem solving

Procedia PDF Downloads 596
557 Convergence Results of Two-Dimensional Homogeneous Elastic Plates from Truncation of Potential Energy

Authors: Erick Pruchnicki, Nikhil Padhye

Abstract:

Plates are important engineering structures which have attracted extensive research since the 19th century. The subject of this work is the static analysis of a linearly elastic homogeneous plate under small deformations. A 'thin plate' is a three-dimensional structure with a small transverse dimension with respect to a flat mid-surface. The general aim of any plate theory is to deduce a two-dimensional model that approximately and accurately describes the plate's deformation in terms of mid-surface quantities. In recent decades, a common starting point for this purpose has been a series expansion of the displacement field across the thickness dimension in terms of the thickness parameter (h). Such attempts are mathematically consistent in deriving leading-order plate theories based on certain a priori scalings between the thickness and the applied loads; for example, asymptotic methods are aimed at generating leading-order two-dimensional variational problems by postulating a formal asymptotic expansion of the displacement fields. Such methods rigorously generate a hierarchy of two-dimensional models depending on the order of magnitude of the applied load with respect to the plate thickness. However, in practice, applied loads are external and thus not directly linked to the geometry or thickness of the plate, rendering any such model (based on a priori scaling) of limited practical utility. In other words, the main limitation of these approaches is that they do not furnish a single plate model for all orders of applied loads. Following the analogy of recent efforts deploying Fourier-series expansion to study the convergence of reduced models, we propose two-dimensional models resulting from truncation of the potential energy and rigorously prove the convergence of these two-dimensional plate models to the parent three-dimensional linear elasticity with increasing truncation order of the potential energy.
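As an illustrative sketch of the kind of thickness expansion the abstract refers to (the notation here is assumed, not taken from the paper), the displacement field can be expanded across the thickness in Legendre polynomials and the potential energy truncated at order N:

```latex
% x = (x_1, x_2): mid-surface coordinates; z \in [-h, h]: thickness
% coordinate; P_n: Legendre polynomials; u^{(n)}: mid-surface unknowns.
u(x, z) \;\approx\; \sum_{n=0}^{N} u^{(n)}(x)\, P_n\!\left(\frac{z}{h}\right)
```

Substituting this truncated expansion into the three-dimensional potential energy and minimising over the mid-surface fields u^{(n)} yields a two-dimensional problem whose solution is expected to converge to the 3D one as N grows.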

Keywords: plate theory, Fourier-series expansion, convergence result, Legendre polynomials

Procedia PDF Downloads 112
555 Small-Group Case-Based Teaching: Effects on Student Achievement, Critical Thinking, and Attitude toward Chemistry

Authors: Reynante E. Autida, Maria Ana T. Quimbo

Abstract:

The chemistry education curriculum provides an excellent avenue where students learn the principles and concepts of chemistry and, at the same time, since chemistry is a central science, better understand related fields. However, the teaching approach used by teachers affects student learning. Case-based teaching (CBT) is one of the various forms of the inductive method: the teacher starts with specifics and then proceeds to general principles. The students’ role in inductive learning shifts from being passive in the traditional approach to being active in learning. In this paper, the effects of Small-Group Case-Based Teaching (SGCBT) on college chemistry students’ achievement, critical thinking, and attitude toward chemistry, including the relationships between each of these variables, were determined. A quasi-experimental counterbalanced design with pre-post control groups was used to determine the effects of SGCBT on engineering students in four intact classes (two treatment groups and two control groups) in one of the state universities in Mindanao. The independent variable is the type of teaching approach (SGCBT versus pure lecture-discussion teaching, or PLDT), while the dependent variables are chemistry achievement (exam scores) and scores in critical thinking and chemistry attitude. Both analysis of covariance (ANCOVA) and t-tests (within and between groups and on gain scores) were used to compare the effects of SGCBT versus PLDT on students’ chemistry achievement, critical thinking, and attitude toward chemistry, while Pearson product-moment correlation coefficients were calculated to determine the relationships between each of the variables. Results show that the use of SGCBT fosters a positive attitude toward chemistry and provides some indications as well of improved chemistry achievement of students compared with PLDT. Meanwhile, the effects of PLDT and SGCBT on critical thinking are comparable.
Furthermore, correlational analysis and focus group interviews indicate that the use of SGCBT not only supports development of positive attitude towards chemistry but also improves chemistry achievement of students. Implications are provided in view of the recent findings on SGCBT and topics for further research are presented as well.
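The Pearson product-moment correlation used in the analysis can be sketched in a few lines; the paired attitude and achievement-gain scores below are invented for illustration and are not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired scores: chemistry attitude vs. achievement gain.
attitude = [3.1, 4.2, 2.8, 4.8, 3.9]
gain     = [5.0, 7.5, 4.0, 9.0, 6.5]
print(round(pearson_r(attitude, gain), 3))
```

A positive r of this kind is what would support the reported link between attitude toward chemistry and achievement; significance testing would additionally require the sample size.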

Keywords: case-based teaching, small-group learning, chemistry cases, chemistry achievement, critical thinking, chemistry attitude

Procedia PDF Downloads 297
554 3D Interpenetrated Network Based on 1,3-Benzenedicarboxylate and 1,2-Bis(4-Pyridyl) Ethane

Authors: Laura Bravo-García, Gotzone Barandika, Begoña Bazán, M. Karmele Urtiaga, Luis M. Lezama, María I. Arriortua

Abstract:

Solid coordination networks (SCNs) are materials consisting of metal ions or clusters that are linked by polyfunctional organic ligands and can be designed to form three-dimensional frameworks. Their structural features, such as high surface areas, thermal stability, and in some cases large cavities, have opened a wide range of applications in fields like drug delivery, host-guest chemistry, biomedical imaging, chemical sensing, heterogeneous catalysis and others related to greenhouse gas storage or even separation. In this sense, the use of polycarboxylate anions and dipyridyl ligands is an effective strategy to produce extended structures with the characteristics needed for these applications. In this context, a novel compound, [Cu4(m-BDC)4(bpa)2DMF]•DMF, has been obtained by microwave synthesis, where m-BDC is 1,3-benzenedicarboxylate and bpa is 1,2-bis(4-pyridyl)ethane. The crystal structure can be described as a three-dimensional framework formed by two equal, interpenetrated networks. Each network consists of two different Cu(II) dimers. Dimer 1 has two copper atoms with square pyramidal coordination, while dimer 2 has one copper with square pyramidal coordination and another with octahedral coordination; the latter dimer is unique in the literature, so the combination of both types of dimers is unprecedented. Benzenedicarboxylate ligands form sinusoidal chains between dimers of the same type and also connect the two kinds of chains, forming layers in the (100) plane. These layers are connected along the [100] direction through the bpa ligand, giving rise to a 3D network with 10 Å² voids on average. However, the fact that there are two interpenetrated networks results in a significant reduction of the available volume. Structural analysis was carried out by means of single-crystal X-ray diffraction and IR spectroscopy.
Thermal and magnetic properties have been measured by means of thermogravimetry (TG), X-ray thermodiffractometry (TDX), and electron paramagnetic resonance (EPR). Additionally, CO2 and CH4 high pressure adsorption measurements have been carried out for this compound.

Keywords: gas adsorption, interpenetrated networks, magnetic measurements, solid coordination network (SCN), thermal stability

Procedia PDF Downloads 323
553 Study of the Montmorillonite Effect on PET/Clay and PEN/Clay Nanocomposites

Authors: F. Zouai, F. Z. Benabid, S. Bouhelal, D. Benachour

Abstract:

Polymer/clay nanocomposites are a relatively important area of research. These reinforced plastics have attracted considerable attention in scientific and industrial fields because a very small amount of clay can significantly improve the properties of the polymer. The polymeric matrices used in this work are two saturated polyesters, i.e., poly(ethylene terephthalate) (PET) and poly(ethylene naphthalate) (PEN). The success of processing compatible blends, based on PET/PEN/clay nanocomposites, in one step by reactive melt extrusion is described. Untreated clay was first purified and functionalized 'in situ' with a compound based on an organic peroxide/sulfur mixture and tetramethylthiuram disulfide as the activator for sulfur. The PET and PEN materials were first separately mixed in the molten state with the functionalized clay. The PET/4 wt% clay and PEN/7.5 wt% clay compositions showed total exfoliation. These compositions, denoted nPET and nPEN, respectively, were used to prepare new n(PET/PEN) nanoblends in the same mixing batch. The n(PET/PEN) nanoblends were compared to neat PET/PEN blends. The blends and nanocomposites were characterized using various techniques, and their microstructural and nanostructural properties were investigated. Fourier transform infrared spectroscopy (FTIR) results showed that the exfoliation of the tetrahedral clay nanolayers is complete and the octahedral structure totally disappears. It was shown that total exfoliation, confirmed by wide-angle X-ray scattering (WAXS) measurements, contributes to the enhancement of impact strength and tensile modulus. In addition, WAXS results indicated that all samples are amorphous. The differential scanning calorimetry (DSC) study indicated the occurrence of one glass transition temperature Tg, one crystallization temperature Tc and one melting temperature Tm for every composition.
This was evidence that both PET/PEN and nPET/nPEN blends are compatible in the entire range of compositions. In addition, the nPET/nPEN blends showed lower Tc and higher Tm values than the corresponding neat PET/PEN blends. In conclusion, the results obtained indicate that n(PET/PEN) blends are different from the pure ones in nanostructure and physical behavior.

Keywords: blends, exfoliation, XRD, DSC, montmorillonite, nanocomposites, PEN, PET, plastograph, reactive melt-mixing

Procedia PDF Downloads 298
552 Simulations to Predict Solar Energy Potential by ERA5 Application in North Africa

Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees

Abstract:

The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observation of solar radiation is available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a “retrospective analysis” of atmospheric parameters generated by assimilating observation data from various sources, including ground observations, satellites, ships, and aircraft, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface-measured data using statistical analysis. The global solar radiation (GSR) distribution was estimated over six selected locations in North Africa for the ten-year period from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE) and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in the performance of the dataset, which reveals significant variation of errors between seasons. The performance of the dataset also changes with the temporal resolution of the data used for comparison: monthly mean values show better performance, but the accuracy of the data is compromised. The solar radiation data of ERA-5 are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R²) varies from 0.93 to 0.99 for the different selected sites in North Africa in the present research. The goal of this research is to give a good representation of global solar radiation to support solar energy applications in all fields; this is done by using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model that gives good results.
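A minimal sketch of the three error statistics named above (RMSE, MBE, MAE); the paired sample values are invented for illustration and are not the study's measurements:

```python
import math

def rmse(obs, est):
    """Root mean square error between observed and estimated values."""
    return math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / len(obs))

def mbe(obs, est):
    """Mean bias error: positive means the estimate overshoots on average."""
    return sum(e - o for o, e in zip(obs, est)) / len(obs)

def mae(obs, est):
    """Mean absolute error."""
    return sum(abs(e - o) for o, e in zip(obs, est)) / len(obs)

# Hypothetical normalised daily GSR values: ground truth vs. ERA-5 estimate.
ground = [0.55, 0.60, 0.72, 0.48, 0.66]
era5   = [0.52, 0.63, 0.70, 0.50, 0.61]
print(rmse(ground, era5), mbe(ground, era5), mae(ground, era5))
```

Note that MBE keeps the sign of the error (systematic over- or under-estimation), while RMSE and MAE measure magnitude only, which is why all three are reported together.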

Keywords: solar energy, solar radiation, ERA-5, potential energy

Procedia PDF Downloads 211
551 Numerical Study of the Breakdown of Surface Divergence Based Models for Interfacial Gas Transfer Velocity at Large Contamination Levels

Authors: Yasemin Akar, Jan G. Wissink, Herlina Herlina

Abstract:

The effect of various levels of contamination on the interfacial air–water gas transfer velocity is studied by Direct Numerical Simulation (DNS). The interfacial gas transfer is driven by isotropic turbulence, introduced at the bottom of the computational domain, diffusing upwards. The isotropic turbulence is generated in a separate, concurrently running large-eddy simulation (LES). The flow fields in the main DNS and the LES are solved using fourth-order discretisations of convection and diffusion. To solve the transport of dissolved gases in water, a fifth-order-accurate WENO scheme is used for scalar convection, combined with a fourth-order central discretisation for scalar diffusion. The damping effect of the surfactant contamination on the near-surface (horizontal) velocities in the DNS is modelled using horizontal gradients of the surfactant concentration. An important parameter in this model, which corresponds to the level of contamination, is ReMa/We, where Re is the Reynolds number, Ma is the Marangoni number, and We is the Weber number. It was previously found that even small levels of contamination (ReMa/We small) lead to a significant drop in the interfacial gas transfer velocity KL. It is known that KL depends on both the Schmidt number Sc (the ratio of the kinematic viscosity and the gas diffusivity in water) and the surface divergence β, i.e. K_L ∝ √(β/Sc). Previously it has been shown that this relation works well for surfaces with low to moderate contamination. However, it will break down for β close to zero. To study the validity of this dependence in the presence of surface contamination, simulations were carried out for ReMa/We = 0, 0.12, 0.6, 1.2, 6, 30 and Sc = 2, 4, 8, 16, 32. First, it will be shown that the scaling of KL with Sc remains valid also for larger ReMa/We.
This is an important result that indicates that, for various levels of contamination, the numerical results obtained at low Schmidt numbers are also valid for significantly higher and more realistic Sc. Subsequently, it will be shown that, with increasing levels of ReMa/We, the dependency of KL on β begins to break down as the increased damping of near-surface fluctuations results in an increased damping of β. Especially for large levels of contamination, this damping is so severe that KL is found to be underestimated significantly.
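The surface-divergence relation K_L ∝ √(β/Sc) can be written as a one-line function; the proportionality constant c below is a placeholder for illustration, not a value from the paper:

```python
import math

def transfer_velocity(beta, sc, c=0.525):
    """Surface-divergence model: K_L = c * sqrt(beta / Sc).
    c is an empirical constant; the default here is illustrative only."""
    return c * math.sqrt(beta / sc)

# K_L falls with the square root of the surface divergence, so as
# contamination damps beta towards zero, K_L -> 0: the breakdown regime.
for beta in (0.04, 0.01, 0.0):
    print(beta, transfer_velocity(beta, sc=4.0))
```

The Sc^(-1/2) dependence is also visible directly: quadrupling Sc halves K_L at fixed β, which is the scaling the study confirms at higher contamination levels.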

Keywords: contamination, gas transfer, surfactants, turbulence

Procedia PDF Downloads 300
550 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that in the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not take with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also come from historically patriarchal societies.
The progression of society comes hand in hand with not only its language, but how machines process those natural languages. These ideas are all extremely vital to the development of natural language models in technology, and they must be taken into account immediately.
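A toy sketch of the descriptor co-occurrence counting such analyses build on; the mini-corpus and word sets are fabricated for illustration, and real studies operate on millions of books with far more careful linguistic processing:

```python
from collections import Counter
import re

# Fabricated mini-corpus standing in for a large literary corpus.
corpus = (
    "The hysterical woman wept. The brilliant man spoke. "
    "A shrill woman complained while the rational man decided."
)

FEMALE, MALE = {"woman", "girl", "she"}, {"man", "boy", "he"}

def adjective_counts(text, targets):
    """Count words that immediately precede a target gender noun,
    a crude proxy for descriptor association."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(prev for prev, word in zip(tokens, tokens[1:])
                   if word in targets)

print(adjective_counts(corpus, FEMALE))
print(adjective_counts(corpus, MALE))
```

Even this crude bigram count surfaces the asymmetry the paper describes; production analyses would use part-of-speech tagging and dependency parsing rather than raw adjacency.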

Keywords: computational analysis, gendered grammar, misogynistic language, neural networks

Procedia PDF Downloads 119
549 Synthesis, Characterization and Rheological Properties of Boronoxide/Polymer Nanocomposites

Authors: Mehmet Doğan, Mahir Alkan, Yasemin Turhan, Zürriye Gündüz, Pinar Beyli, Serap Doğan

Abstract:

Advances and new discoveries in materials science have played an important role in technological development. Today, materials science branches into subfields such as metals, nonmetals, chemicals, and polymers. Polymeric nanocomposites, among the most important of these groups, have found a wide field of application. For many polymers used in different fields of industry, improved thermal stability is desired. One of the ways to improve this property of polymers is to form nanocomposite products of them using different fillers. Boron compounds have many uses, and their number is increasing day by day. In order to further increase the variety of application areas and the industrial importance of boron compounds, it is necessary to synthesize nano-products and to find new application areas for these products. In this study, PMMA/boronoxide nanocomposites were synthesized using solution intercalation, polymerization and melting methods, and PAA/boronoxide nanocomposites using the solution intercalation method. The nanocomposites were characterized by XRD, FTIR-ATR, DTA/TG, BET, SEM, and TEM instruments. The effects of filler amount, solvent type and mediating reagent on the thermal stability of the polymers were investigated. In addition, the rheological properties of the PMMA/boronoxide nanocomposites synthesized by the melting method were investigated using a high-pressure capillary rheometer. XRD analysis showed that boronoxide was dispersed in the polymer matrix; FTIR-ATR showed that there were interactions of boronoxide with both PAA and PMMA; and TEM showed that boronoxide particles had a spherical structure and were dispersed in nano-sized dimensions in the polymer matrix. The thermal stability of the polymers increased with the addition of boronoxide to the polymer matrix, and the decomposition mechanism of PAA changed.
From rheological measurements, it was found that PMMA and the PMMA/boronoxide nanocomposites exhibited non-Newtonian, pseudo-plastic, shear-thinning behavior under all experimental conditions.
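Shear-thinning behaviour of this kind is commonly described by the Ostwald-de Waele power-law model; the consistency and flow indices below are illustrative values, not fitted parameters from the study:

```python
def apparent_viscosity(shear_rate, K, n):
    """Power-law (Ostwald-de Waele) model: eta = K * gamma_dot**(n - 1).
    n < 1 gives shear-thinning (pseudo-plastic) behaviour; K and n here
    are hypothetical, chosen only to illustrate the trend."""
    return K * shear_rate ** (n - 1)

# Apparent viscosity falls as shear rate rises when n < 1.
K, n = 5000.0, 0.4   # hypothetical consistency index (Pa.s^n) and flow index
for rate in (10.0, 100.0, 1000.0):
    print(rate, apparent_viscosity(rate, K, n))
```

Fitting log(eta) against log(shear rate) from capillary rheometer data yields n from the slope; n below one is what the abstract's "pseudo-plastic, shear-thinning" finding corresponds to.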

Keywords: boronoxide, polymer, nanocomposite, rheology, characterization

Procedia PDF Downloads 433
548 Influence Zone of Strip Footing on Untreated and Cement Treated Sand Mat Underlain by Soft Clay (2nd reviewed)

Authors: Sharifullah Ahmed

Abstract:

Shallow foundations on soft soils without ground improvement can undergo a high level of settlement. In such a case, an alternative to pile foundations may be shallow strip footings placed on a soil system in which the upper layer is untreated or cement-treated compacted sand, to limit the settlement to a permissible level. This research work deals with a rigid plane-strain strip footing of 2.5 m width placed on a soil consisting of an untreated or cement-treated sand layer underlain by homogeneous soft clay. Upper layers both thin and thick compared with the footing width were considered. The soft inorganic cohesive normally consolidated clay layer is considered undrained for plastic loading stages and drained in consolidation stages, and the sand layer is drained in all loading stages. FEM analysis was done using PLAXIS 2D Version 8.0 with a model consisting of a clay deposit of 15 m thickness and 18 m width. The soft clay layer was modeled using the Hardening Soil Model, Soft Soil Model and Soft Soil Creep Model, and the upper improvement layer was modeled using only the Hardening Soil Model. The system is considered fully saturated. A natural void ratio of 1.2 is used. Total displacement fields of the strip footing and subsoil layers in the cases of untreated and cement-treated sand as the upper layer are presented. For Hi/B = 0.6 or above, the major deformation and the influence zone of the footing are confined to the upper layer, which indicates the complete effectiveness of the untreated upper layer in bearing the foundation. For Hi/B = 0.3 or above with the cement-treated upper layer, the major deformation likewise occurs within the upper layer and the influence zone of the footing is confined to it, indicating the complete effectiveness of the cement-treated upper layer. The brittle behavior of cemented sand and fracture or cracks are not considered in this analysis.

Keywords: displacement, ground improvement, influence depth, PLAXIS 2D, primary and secondary settlement, sand mat, soft clay

Procedia PDF Downloads 93
547 Biological Hazards and Laboratory-Inflicted Infections in Sub-Saharan Africa

Authors: Godfrey Muiya Mukala

Abstract:

This research looks at an array of fields in Sub-Saharan Africa, comprising agriculture, food enterprises, medicine, genetically modified organisms, microbiology, and nanotechnology, that can gain from biotechnological research and development. Research into dangerous organisms, mainly bacteria, rickettsiae, fungi, parasites, or genetically engineered organisms, has raised pressing questions about the biological danger they pose to human beings and the environment because of their uncertainties. In addition, the recurrence of previously managed diseases and the emergence of new diseases are connected to biosafety challenges, especially in rural set-ups in low- and middle-income countries. Notably, biotechnology laboratories are required to adopt biosafety measures to protect their workforce, community, environment, and ecosystem from unforeseen materials and organisms. Sensitization and the inclusion of educational frameworks for laboratory workers are essential to acquiring solid knowledge of harmful biological agents, in addition to the human pathogenicity, susceptibility, and epidemiology of the biological material used in research and development. This article reviews and analyzes research intended to identify the proper implementation of universally accepted practices regarding laboratory safety and biological hazards. This research identifies ideal microbiological methods, adequate containment equipment, sufficient resources, safety barriers, and specific training and education of the laboratory workforce to decrease and contain biological hazards. Subsequently, knowledge of standardized microbiological techniques and processes, together with the employment of containment facilities, protective barriers, and equipment, goes a long way in preventing occupational infections.
Similarly, reduction of risks and prevention may be attained by training, education, and research on biohazards, pathogenicity, and the epidemiology of the relevant microorganisms. In this way, medical professionals in rural setups may apply knowledge acquired from the past to anticipate possible concerns in the future.

Keywords: Sub-Saharan Africa, biotechnology, laboratory, infections, health

Procedia PDF Downloads 76
546 Polymer Flooding: Chemical Enhanced Oil Recovery Technique

Authors: Abhinav Bajpayee, Shubham Damke, Rupal Ranjan, Neha Bharti

Abstract:

Polymer flooding is a dramatic improvement over water flooding and is quickly becoming one of the leading EOR technologies used for improving oil recovery. With increasing energy demand and depleting oil reserves, EOR techniques are becoming increasingly significant. Since most oil fields have already begun water flooding, this chemical EOR technique can be implemented with fewer resources than any other EOR technique. Polymer increases the viscosity of the injected water, reducing water mobility and thus achieving a more stable displacement. Field experience has shown that polymer flooding increases the injection viscosity. While the injection of a polymer solution improves reservoir conformance, the beneficial effect ceases as soon as one attempts to push the polymer solution with water. Polymer flooding is the most commonly applied chemical EOR technique because of its higher success rate. In polymer flooding, a water-soluble polymer such as polyacrylamide is added to the injected water. This increases the viscosity of the water toward that of a gel, bringing the water mobility closer to that of the oil and greatly improving the efficiency of the water flood. It also improves vertical and areal sweep efficiency as a consequence of improving the water/oil mobility ratio. Polymer flooding plays an important role in oil exploitation, but around 60 million tons of wastewater are produced per day alongside the extracted oil. The treatment and reuse of this wastewater therefore become significant, and can be carried out by electrodialysis technology. This treatment can not only decrease environmental pollution but also close the water circuit of polymer flooding during crude oil extraction. There are three potential ways in which a polymer flood can make the oil recovery process more efficient: (1) through the effects of polymers on fractional flow, (2) by decreasing the water/oil mobility ratio, and (3) by diverting injected water from zones that have already been swept. It has also been suggested that the viscoelastic behavior of polymers can improve displacement efficiency. Polymer flooding may also have an economic impact because less water is injected and produced compared with water flooding. Future work should focus on developing polymers that can be used in reservoirs of high temperature and high salinity, applying polymer flooding in different reservoir conditions, and combining polymers with other processes (e.g., surfactant/polymer flooding).
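The water/oil mobility ratio discussed above is conventionally defined as M = (k_rw/μ_w)/(k_ro/μ_o). A minimal sketch of how thickening the injected water shifts this ratio, using hypothetical end-point permeabilities and viscosities not taken from the abstract:

```python
def mobility_ratio(k_rw, mu_w, k_ro, mu_o):
    """Water/oil mobility ratio M = (k_rw / mu_w) / (k_ro / mu_o).
    M <= 1 indicates a favorable, stable displacement front."""
    return (k_rw / mu_w) / (k_ro / mu_o)

# Illustrative (hypothetical) values: end-point relative permeabilities
# and viscosities in centipoise (cP).
k_rw, k_ro = 0.3, 0.8
mu_o = 10.0        # oil viscosity, cP
mu_water = 1.0     # plain injected water, cP
mu_polymer = 20.0  # polymer-thickened water, cP

M_water = mobility_ratio(k_rw, mu_water, k_ro, mu_o)      # 3.75 -> unfavorable
M_polymer = mobility_ratio(k_rw, mu_polymer, k_ro, mu_o)  # 0.1875 -> favorable
```

With these assumed numbers, raising the water viscosity from 1 cP to 20 cP moves M from well above 1 to well below 1, which is the stabilizing effect the abstract attributes to the polymer.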

Keywords: fractional flow, polymer, viscosity, water/oil mobility ratio

Procedia PDF Downloads 398
545 Quality Assessment of SSRU Program in Education

Authors: Rossukhon Makaramani, Supanan Sittilerd, Wipada Prasarnsaph

Abstract:

The study aimed to 1) examine the management status of a Program in Education at the Faculty of Education, Suan Sunandha Rajabhat University (SSRU); 2) determine the main components, indicators, and criteria for constructing a quality assessment framework; 3) assess the quality of the SSRU Program in Education; and 4) provide recommendations to promote academic excellence. The program assessed was the Bachelor of Education Program in Education (5 years), Revised Version 2009. The population and samples were stakeholders involved in the implementation of this program during the academic year 2012. Results were: 1) Regarding management status, the Faculty of Education achieved a good level (4.20) in the third cycle of external quality assessment by the Office for National Education Standards and Quality Assessment (ONESQA). There were 1,192 students enrolled in the program, divided into 5 major fields of study. There were 50 faculty members, 37 holding master's degrees and 13 holding doctorate degrees. Their academic positions consisted of 35 lecturers, 10 assistant professors, and 5 associate professors. For program management, there was a committee of 5 members for the program and also a committee of 4 or 5 members for each major field of study. Among the faculty members, 41 taught in this program. The ratio between faculty and students was 1:26. The result of the 2013 internal quality assessment indicated that the system and mechanism of program development and management were at a fair level. However, the overall result was at a good level both by the criteria of the Office of the Higher Education Commission (4.29) and by those of the ONESQA (4.37); 2) The framework for assessing the quality of the program consisted of 4 dimensions and 15 indicators; 3) Assessment of the program yielded a good level of quality (4.04); 4) Recommendations to promote academic excellence included management and development of the program focusing on teacher reform toward a highly recognized profession; cultivation of the values, morals, ethics, and spirit of being a teacher; construction of specialized programs; development of faculty potential; enhancement of the demonstration school's readiness level; and provision of dormitories for learning.

Keywords: quality assessment, education program, Suan Sunandha Rajabhat University, academic excellence

Procedia PDF Downloads 294
544 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas such as astronomy, medical imaging, geophysics, and nondestructive evaluation, many problems related to calibration, fitting, or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data, insufficient data, and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness, and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number acts as an amplifier of uncertainties in the data during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas such as numerical optimization, where interior-point algorithms for solving linear programs lead to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to arbitrary complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required in iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solution schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case, obtained by choosing the Euclidean norm in an asymmetrical structure. 4) On some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the classical techniques we tested (Gauss, Moore-Penrose inverse, minimal residual, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach, based on stochastic matrices, aiming at computing some characteristics of the solution of a linear system (such as the extreme values, the mean, the variance, …) prior to its resolution. Such an approach, if it proves efficient, would be a source of information on the solution of a system of linear equations.
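For orientation, the classical Kaczmarz iteration referenced in the abstract (not the authors' generalized Lp-norm variant, whose details are not given here) can be sketched as follows: each step projects the current iterate onto the hyperplane defined by one row of the system. The matrix and right-hand side below are illustrative, not one of the paper's test cases.

```python
import numpy as np

def kaczmarz(A, b, iters=500):
    """Classical cyclic Kaczmarz iteration for Ax = b: at step k,
    project the current iterate onto the hyperplane {y : a_i . y = b_i}
    defined by row i = k mod m of A."""
    m, n = A.shape
    x = np.zeros(n)
    for k in range(iters):
        i = k % m
        a_i = A[i]
        # Orthogonal projection of x onto the i-th row's hyperplane.
        x = x + (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Small, well-posed illustrative system: 3x + y = 9, x + 2y = 8.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = kaczmarz(A, b)  # converges toward the solution [2, 3]
```

For consistent systems this cyclic projection converges to a solution; its convergence rate degrades as the rows become nearly parallel, which is one way the ill-conditioning discussed above manifests in practice.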

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 136
543 Investigation of Different Surface Oxidation Methods on Pyrolytic Carbon

Authors: Lucija Pustahija, Christine Bandl, Wolfgang Kern, Christian Mitterer

Abstract:

Concerning today's ecological demands, producing reliable materials from sustainable resources is a continuously developing topic. One such example is the production of carbon materials via pyrolysis of natural gases or biomass. The remarkable properties of pyrolytic carbon are utilized in various fields; in particular, application in the building industry is a promising route toward the utilization of pyrolytic carbon and composites based on it. For many applications, surface modification of carbon is an important step in tailoring its properties. Therefore, in this paper, different oxidation methods were investigated to prepare the carbon surface before functionalizing it with organosilanes, which act as coupling agents for epoxy and polyurethane resins. A building material based on carbon composites produced in this way could serve as a lightweight, durable material applied wherever water or air filtration/purification is needed. In this work, both wet and dry oxidation were investigated. Wet oxidation was first performed in solutions of nitric acid (at 120 °C and 150 °C), followed by oxidation in hydrogen peroxide (80 °C) for 3 and 6 h. Moreover, a hydrothermal method (under oxygen gas) in autoclaves was investigated. Dry oxidation was performed under plasma and corona discharges, using different power values to establish optimum conditions. Selected samples were then, in preliminary experiments, subjected to silanization of the surface with amino and glycidoxy organosilanes. The functionalized surfaces were examined by X-ray photoelectron spectroscopy, Fourier transform infrared spectroscopy, and scanning electron microscopy. The results of the wet and dry oxidation methods indicated that the creation of surface functionalities was influenced by the temperature, the concentration of the reagents (and gases), and the duration of the treatment. Sequential oxidation in aqueous HNO₃ and H₂O₂ yields a higher content of oxygen functionalities at lower concentrations of oxidizing agents, compared to oxidizing the carbon with concentrated nitric acid. Plasma oxidation results in non-permanent functionalization of the carbon surface, so it is necessary to find oxidation treatment parameters that enable longer stability of the functionalities. Results of the functionalization of the carbon surfaces with organosilanes will be presented as well.

Keywords: building materials, dry oxidation, organosilanes, pyrolytic carbon, resins, surface functionalization, wet oxidation

Procedia PDF Downloads 100