Search results for: conditional random fields
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4530


900 Thermal Stability and Electrical Conductivity of Ca₅Mg₄₋ₓMₓ(VO₄)₆ (0 ≤ x ≤ 4) where M = Zn, Ni Measured by Impedance Spectroscopy

Authors: Anna S. Tolkacheva, Sergey N. Shkerin, Kirill G. Zemlyanoi, Olga G. Reznitskikh, Pavel D. Khavlyuk

Abstract:

Calcium oxovanadates with a garnet-related structure are multifunctional oxides used in various fields such as photoluminescence, microwave dielectrics, and magneto-dielectrics. For example, vanadate garnets are self-luminescent compounds. They attract attention as RE-free broadband excitation and emission phosphors and are candidate materials for UV-based white light-emitting diodes (WLEDs). Ca₅M₄(VO₄)₆ (M = Mg, Zn, Co, Ni, Mn) compounds are also considered promising as substrate materials for microwave devices. However, the relation between their structure, composition and physical/chemical properties remains unclear. Given the above observations, the goals of this study are to synthesise Ca₅M₄(VO₄)₆ (M = Mg, Zn, Ni) and to study their thermal and electrical properties. Solid solutions Ca₅Mg₄₋ₓMₓ(VO₄)₆ (0 ≤ x ≤ 4), where M is Zn or Ni, were synthesized by the sol-gel method. The single-phase character of the final products was checked by powder X-ray diffraction on a Rigaku D/MAX-2200 diffractometer using Cu Kα radiation in the 2θ range from 15° to 70°. The dependence of thermal properties on the chemical composition of the solid solutions was studied using simultaneous thermal analysis (DSC and TG). Thermal analyses were conducted in a Netzsch STA 449C Jupiter simultaneous analyser, in Ar atmosphere, in the temperature range from 25 to 1100 °C; the heating rate was 10 K·min⁻¹. Coefficients of thermal expansion (CTE) were obtained by dilatometry measurements in air up to 800 °C using a Netzsch 402PC dilatometer; the heating rate was 1 K·min⁻¹. Impedance spectra were obtained via the two-probe technique with a Parstat 2273 impedance meter in air up to 700 °C with variation of pH₂O from 0.04 to 3.35 kPa. Cation deficiency in the Ca and Mg sublattice upon substitution of MgO with ZnO up to 1/6 was observed using Rietveld refinement of the crystal structure. The melting point was found to decrease as x changes from 0 to 4 in Ca₅Mg₄₋ₓMₓ(VO₄)₆, where M is Zn or Ni.
It was observed that electrical conductivity does not depend on air humidity. The reported study was funded by the RFBR Grant No. 17–03–01280. Sample attestation was carried out in the Shared Access Centers at the IHTE UB RAS.
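The conversion from an impedance spectrum to a bulk conductivity, and from conductivities at two temperatures to an Arrhenius activation energy, can be sketched as follows. This is a minimal illustration of the standard relations σ = t/(R·A) and σ = A·exp(−Ea/kBT); the resistances, dimensions and temperatures below are hypothetical, not values from the study.

```python
import math

def conductivity_S_per_cm(resistance_ohm, thickness_cm, electrode_area_cm2):
    """sigma = t / (R * A): bulk conductivity from the resistance read off
    the impedance arc and the pellet geometry (illustrative sketch)."""
    return thickness_cm / (resistance_ohm * electrode_area_cm2)

def activation_energy_eV(sigma1, T1, sigma2, T2):
    """Arrhenius estimate of Ea from two (sigma, T) points, assuming
    sigma = A * exp(-Ea / (kB * T))."""
    kB = 8.617333e-5  # Boltzmann constant in eV/K
    return kB * math.log(sigma1 / sigma2) / (1.0 / T2 - 1.0 / T1)
```

With a hypothetical 1 mm thick pellet (0.1 cm) of 1 cm² electrode area showing 1 kΩ resistance, this gives σ = 10⁻⁴ S/cm.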

Keywords: garnet structure, electrical conductivity, thermal expansion, thermal properties

Procedia PDF Downloads 156
899 Calibration of Mini TEPC and Measurement of Lineal Energy in a Mixed Radiation Field Produced by Neutrons

Authors: I. C. Cho, W. H. Wen, H. Y. Tsai, T. C. Chao, C. J. Tung

Abstract:

A tissue-equivalent proportional counter (TEPC) is an instrument used to measure radiation single-event energy depositions in a subcellular target volume. The measured quantity is the microdosimetric lineal energy, which determines the relative biological effectiveness, RBE, for radiation therapy or the radiation weighting factor, WR, for radiation protection. A TEPC is generally used in a mixed radiation field, where each component radiation has its own RBE or WR value. To reduce the pile-up effect during radiotherapy measurements, a miniature TEPC (mini TEPC) with a cavity size on the order of 1 mm may be required. In the present work, a homemade mini TEPC with a cylindrical cavity of 1 mm in both diameter and height was constructed to measure the lineal energy spectrum of a mixed radiation field with high- and low-LET radiations. Instead of using external radiation beams to penetrate the detector wall, mixed radiation fields were produced by the interactions of neutrons with TEPC walls that contained small plugs of different materials, i.e. Li, B, A150, Cd and N. In all measurements, the mini TEPC was placed at the beam port of the Tsing Hua Open-pool Reactor (THOR). Measurements were performed using the propane-based tissue-equivalent gas mixture, i.e. 55% C3H8, 39.6% CO2 and 5.4% N2 by partial pressure. A gas pressure of 422 torr was applied to simulate a 1 μm diameter biological site. The calibration of the mini TEPC was performed using two marker points in the lineal energy spectrum, i.e. the proton edge and the electron edge. Measured spectra revealed high lineal energy (> 100 keV/μm) peaks due to neutron-capture products, medium lineal energy (10–100 keV/μm) peaks from hydrogen-recoil protons, and low lineal energy (< 10 keV/μm) peaks from reactor photons. For the Li and B plugs, the high lineal energy peaks were quite prominent. The medium lineal energy peaks were in the decreasing order of Li, Cd, N, A150, and B.
The low lineal energy peaks were small compared to the other peaks. This study demonstrated that internally produced mixed radiations from the interactions of neutrons with different plugs in the TEPC wall provide a useful approach for TEPC measurements of lineal energy.
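The geometric bookkeeping behind lineal energy can be sketched as follows. The Cauchy mean-chord relation l̄ = 4V/S is standard microdosimetry and gives 2d/3 for a right cylinder whose height equals its diameter, as in the cavity described above; the energy-deposit values in the example are illustrative only, not measured data.

```python
import math

def mean_chord_cylinder(d):
    """Cauchy mean chord length 4V/S for a right cylinder with
    height equal to diameter d (any length unit)."""
    r = d / 2.0
    volume = math.pi * r * r * d                      # V = pi r^2 h, h = d
    surface = 2.0 * math.pi * r * r + 2.0 * math.pi * r * d
    return 4.0 * volume / surface                     # equals 2d/3 here

def lineal_energy_keV_per_um(energy_keV, mean_chord_um):
    """y = epsilon / l-bar: single-event energy deposit divided by the
    mean chord length of the simulated micrometre-scale site."""
    return energy_keV / mean_chord_um
```

For the simulated 1 μm site, l̄ = 2/3 μm, so a hypothetical 50 keV event would correspond to y = 75 keV/μm.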

Keywords: TEPC, lineal energy, microdosimetry, radiation quality

Procedia PDF Downloads 470
898 Exploring Multimodal Communication: Intersections of Language, Gesture, and Technology

Authors: Rasha Ali Dheyab

Abstract:

In today's increasingly interconnected and technologically-driven world, communication has evolved beyond traditional verbal exchanges. This paper delves into the fascinating realm of multimodal communication, a dynamic field at the intersection of linguistics, gesture studies, and technology. The study of how humans convey meaning through a combination of spoken language, gestures, facial expressions, and digital platforms has gained prominence as our modes of interaction continue to diversify. This exploration begins by examining the foundational theories in linguistics and gesture studies, tracing their historical development and mutual influences. It further investigates the role of nonverbal cues, such as gestures and facial expressions, in augmenting and sometimes even altering the meanings conveyed by spoken language. Additionally, the paper delves into the modern technological landscape, where emojis, GIFs, and other digital symbols have emerged as new linguistic tools, reshaping the ways in which we communicate and express emotions. The interaction between traditional and digital modes of communication is a central focus of this study. The paper investigates how technology has not only introduced new modes of expression but has also influenced the adaptation of existing linguistic and gestural patterns in online discourse. The emergence of virtual reality and augmented reality environments introduces yet another layer of complexity to multimodal communication, offering new avenues for studying how humans navigate and negotiate meaning in immersive digital spaces. Through a combination of literature review, case studies, and theoretical analysis, this paper seeks to shed light on the intricate interplay between language, gesture, and technology in the realm of multimodal communication. 
By understanding how these diverse modes of expression intersect and interact, we gain valuable insights into the ever-evolving nature of human communication and its implications for fields ranging from linguistics and psychology to human-computer interaction and digital anthropology.

Keywords: multimodal communication, linguistics, gesture studies, emojis, verbal communication, digital

Procedia PDF Downloads 82
897 Differences in Production of Knowledge between Internationally Mobile versus Nationally Mobile and Non-Mobile Scientists

Authors: Valeria Aman

Abstract:

The presented study examines the impact of international mobility on knowledge production among mobile scientists and within the sending and receiving research groups. Scientists are relevant to the dynamics of knowledge production because scientific knowledge is mainly characterized by embeddedness and tacitness. International mobility enables the dissemination of scientific knowledge to other places and encourages new combinations of knowledge. It can also increase the interdisciplinarity of research by forming synergetic combinations of knowledge. Particularly innovative ideas can have their roots in related research domains and are sometimes transferred only through the physical mobility of scientists. Diversity among scientists with respect to their knowledge base can act as an engine for the creation of knowledge. It is therefore relevant to study how knowledge acquired through international mobility affects the knowledge production process. In certain research domains, international mobility may be essential to contextualize knowledge and to gain access to knowledge located at distant places. The knowledge production process contingent on the type of international mobility and the epistemic culture of a research field is examined. The production of scientific knowledge is a multi-faceted process, the output of which is mainly published in scholarly journals. Therefore, the study builds upon publication and citation data covered in Elsevier’s Scopus database for the period of 1996 to 2015. To analyse these data, bibliometric and social network analysis techniques are used. A basic analysis of scientific output using publication data, citation data and data on co-authored publications is combined with a content map analysis. Abstracts of publications indicate whether a research stay abroad makes an original contribution methodologically, theoretically or empirically. 
Moreover, co-citations are analysed to map linkages among scientists and emerging research domains. Finally, acknowledgements, which can function as channels of formal and informal communication between the actors involved in the process of knowledge production, are studied. The results provide a better understanding of how the international mobility of scientists contributes to the production of knowledge, by contrasting the knowledge production dynamics of internationally mobile scientists with those of nationally mobile or non-mobile scientists. The findings also indicate whether international mobility accelerates the production of knowledge and the emergence of new research fields.
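The co-citation mapping step mentioned above can be sketched in a few lines: two works are co-cited whenever they appear together in the same reference list, and the pair counts form the edge weights of a co-citation network. The paper IDs below are invented placeholders, not records from Scopus.

```python
from collections import Counter
from itertools import combinations

def co_citation_counts(reference_lists):
    """Count how often each pair of cited works appears together.
    reference_lists: one list of cited-work IDs per citing paper."""
    pairs = Counter()
    for refs in reference_lists:
        # sort so each unordered pair gets a single canonical key
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs
```

The resulting counter can be fed directly into a network-analysis library as a weighted edge list.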

Keywords: bibliometrics, diversity, interdisciplinarity, international mobility, knowledge production

Procedia PDF Downloads 294
896 Brazilian Public Security: Governability and Constitutional Change

Authors: Gabriel Dolabella, Henrique Rangel, Stella Araújo, Carlos Bolonha, Igor de Lazari

Abstract:

Public security is a common subject on the Brazilian political agenda. The seventh largest economy in the world has high crime and insecurity rates. Specialists try to explain this social picture based on poverty, inequality or public policies addressing drug trafficking. This excerpt approaches the State measures taken to handle that picture. Therefore, public security - the law enforcement institutions - is at the core of this paper, particularly the relationship among federal and state law enforcement agencies, mainly ruled by a system of urgency. The problems examined are informal changes in law enforcement management and the collaboration of public opinion with these changes. Whenever there were huge international events, the Brazilian armed forces occupied the streets to assure law enforcement - ensuring order. This logic, considered over the long term, could impact the federal structure of the country. Post-Madisonian theorists observe that urgency is often associated with delegation of powers, which is true for Brazilian law enforcement, but here there is a different delegation: states continuously delegate law enforcement powers to the federal government through the use of the Armed Forces. Therefore, the hypothesis is: Brazil is under a political process of federalization of public security. The political framework addressed here can be explained by the disrespect of legal constraints and the failure of rule-of-law theoretical models. The methodology of analysis is based on general criteria. Temporally, this study investigates events from 2003, when discussions about the disarmament statute began. Geographically, this study is limited to Brazilian borders. Materially, the analysis results from the observation of legal resources and political resources (pronouncements of government officials).
The main parameters are based on post-Madisonian theory, and the federalization of public security can be assessed through credibility and popularity, which allow evaluation of this political process of constitutional change. The objective is to demonstrate how the Military Forces are used in public security, not as a random fact or an isolated political event, in order to understand, from an institutional perspective, the political motivations and effects that stem from that use.

Keywords: public security, governability, rule of law, federalism

Procedia PDF Downloads 678
895 Assessment of Vehicular Emission and Its Impact on Urban Air Quality

Authors: Syed Imran Hussain Shah

Abstract:

Air pollution rapidly impacts the Earth's climate and environmental quality, causing public health nuisances and cardio-pulmonary illnesses. Air pollution is a global issue, and all population groups in all regions of the developed and developing parts of the world are affected by it. The promise of a reduction in deaths and diseases as per SDG No. 3 is an international commitment towards sustainable development. In that context, assessing and evaluating ambient air quality is paramount. This article estimates the air pollution released by vehicles on the roads of Lahore, a megacity with a population of 13.98 million. A survey was conducted at fuel stations to estimate the fuel pumped to different types of vehicles; the number of fuel stations in Lahore is around 350. Another survey was conducted to interview drivers about the fuel consumption of their vehicles. In total, 189 fuel stations and 400 drivers were surveyed using a combination of random sampling and convenience sampling methods. The sampling was done in a manner to cover all areas of the city, including central commercial hubs, modern housing societies, industrial zones, main highways, and old traditional population centres. Mathematical equations were used to estimate the emissions from different modes of vehicles. Due to the increase in population, the number of vehicles is increasing, and consequently, traffic emissions are rising. Motorcycles, auto rickshaws, motor cars, and vans were the main contributors of carbon dioxide and vehicular emissions in the air. It was observed that vehicles that use petrol produce more carbon dioxide emissions. Buses and trucks were the main contributors of NOx due to the use of diesel fuel, whereas vans, buses, and trucks produce the maximum amount of SO2.
PM10 and PM2.5 were mainly produced by motorcycles and two-stroke motorcycle rickshaws, while auto rickshaws and motor cars mainly produce benzene emissions. This study may act as a major tool for traffic and vehicle policy decisions to promote better fuel quality and more fuel-efficient vehicles in order to reduce emissions.
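A fuel-based inventory of the kind described above multiplies the volume of fuel sold by a fuel- and pollutant-specific emission factor. The sketch below illustrates that bookkeeping only; the factor values are rough placeholder figures, not the factors or survey data used in the study.

```python
# Placeholder emission factors in grams of pollutant per litre of fuel
# (illustrative assumptions, not the study's values).
EMISSION_FACTORS_G_PER_L = {
    ("petrol", "CO2"): 2310.0,
    ("diesel", "CO2"): 2680.0,
    ("diesel", "NOx"): 40.0,
}

def emissions_kg(fuel_sold_l, fuel_type, pollutant):
    """Emissions = fuel volume sold x fuel-specific emission factor."""
    factor = EMISSION_FACTORS_G_PER_L[(fuel_type, pollutant)]
    return fuel_sold_l * factor / 1000.0  # grams -> kilograms
```

Summing such terms over vehicle classes and fuel types yields the city-wide estimate.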

Keywords: particulate matter, nitrogen dioxide, climate change, pollution control

Procedia PDF Downloads 15
894 A System Architecture for Hand Gesture Control of Robotic Technology: A Case Study Using a Myo™ Arm Band, DJI Spark™ Drone, and a Staubli™ Robotic Manipulator

Authors: Sebastian van Delden, Matthew Anuszkiewicz, Jayse White, Scott Stolarski

Abstract:

Industrial robotic manipulators have been commonplace in the manufacturing world since the early 1960s, while unmanned aerial vehicles (drones) have only begun to realize their full potential in the service industry and the military. The presence of these technologies in their respective fields will only grow in the coming years. While these technologies have greatly evolved, the typical approach to human interaction with these robots has not. In the industrial robotics realm, a manipulator is typically jogged around using a teach pendant and programmed using a networked computer or the teach pendant itself via a proprietary software development platform. Drones are typically controlled using a two-handed controller equipped with throttles, buttons, and sticks, an app that can be downloaded to one’s mobile device, or a combination of both. This application-oriented work offers a novel approach to human interaction with both unmanned aerial vehicles and industrial robotic manipulators via hand gestures and movements. Two systems have been implemented, both of which use a Myo™ armband to control either a drone (DJI Spark™) or a robotic arm (Stäubli™ TX40). The methodologies developed by this work present a mapping of armband gestures (fist, finger spread, swing hand in, swing hand out, swing arm left/up/down/right, etc.) to either drone or robot arm movements. The findings of this study present the efficacy and limitations (precision and ergonomics) of hand gesture control of two distinct types of robotic technology. All source code associated with this project will be open sourced and placed on GitHub. In conclusion, this study offers a framework that maps hand and arm gestures to drone and robot arm control. The system has been implemented using current ubiquitous technologies, and these software artifacts will be open sourced for future researchers or practitioners to use in their work.
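The gesture-to-movement mapping layer described above can be sketched as a simple lookup: a recognized pose is translated into a vehicle command, with an idle fallback for unbound gestures. The gesture names echo the armband poses listed in the abstract, but the command strings are illustrative placeholders, not calls from the DJI or Stäubli SDKs.

```python
# Hypothetical binding of armband gestures to drone commands
# (placeholder command names, not an actual SDK API).
GESTURE_TO_DRONE_COMMAND = {
    "fist": "land",
    "fingers_spread": "take_off",
    "swing_hand_in": "yaw_left",
    "swing_hand_out": "yaw_right",
    "swing_arm_up": "ascend",
    "swing_arm_down": "descend",
}

def dispatch(gesture, mapping=GESTURE_TO_DRONE_COMMAND):
    """Translate a recognized gesture into a command; unbound gestures
    fall back to a safe 'hover'."""
    return mapping.get(gesture, "hover")
```

A second table with the same shape would serve the robot-arm side, which is one reason a table-driven design suits a dual-target system like this.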

Keywords: human robot interaction, drones, gestures, robotics

Procedia PDF Downloads 161
893 Superchaotropicity: Grafted Surface to Probe the Adsorption of Nano-Ions

Authors: Raimoana Frogier, Luc Girard, Pierre Bauduin, Diane Rebiscoul, Olivier Diat

Abstract:

Nano-ions (NIs) are ionic species or clusters of nanometric size. Their low charge density and the delocalization of their charges give special properties to some NIs belonging to the chemical classes of polyoxometalates (POMs) or boron clusters. They have the particularity of interacting non-covalently with neutral hydrated surfaces or interfaces such as assemblies of surface-active molecules (micelles, vesicles, lyotropic liquid crystals), foam bubbles or emulsion droplets. This makes it possible to classify these NIs in the Hofmeister series as superchaotropic ions. The mechanism of adsorption is complex, linked to the simultaneous dehydration of the ion and of the molecule or supramolecular assembly with which it interacts, with an enthalpic gain in the free energy of the system. This interaction process is reversible and is sufficiently pronounced to induce changes in molecular and supramolecular shape or conformation and phase transitions in the liquid phase, all at sub-millimolar ionic concentrations. This property of some NIs opens up new possibilities for applications in fields as varied as biochemistry, for solubilization, or the recovery of metals of interest by foams in the form of NIs. In order to better understand the physico-chemical mechanisms at the origin of this interaction, we use silicon wafers functionalized with non-ionic oligomers (polyethylene glycol, or PEG, chains) to study in situ, by X-ray reflectivity, the interaction of NIs with the grafted chains. This study, carried out at the ESRF (European Synchrotron Radiation Facility), has shown that the adsorption of NIs, such as POMs, has very fast kinetics. Moreover, the distribution of the NIs in the grafted PEG chain layer was quantified. These results are very encouraging and confirm what has been observed on soft interfaces such as micelles or foams.
The possibility to play on the density, length and chemical nature of the grafted chains makes this system an ideal tool to provide kinetic and thermodynamic information to decipher the complex mechanisms at the origin of this adsorption.

Keywords: adsorption, nano-ions, solid-liquid interface, superchaotropicity

Procedia PDF Downloads 67
892 Analysis of Citation Rate and Data Reuse for Openly Accessible Biodiversity Datasets on Global Biodiversity Information Facility

Authors: Nushrat Khan, Mike Thelwall, Kayvan Kousha

Abstract:

Making research data openly accessible has been mandated by most funders over the last 5 years, as it promotes reproducibility in science and reduces duplication of effort to collect the same data. There is evidence that articles that publicly share research data have higher citation rates in the biological and social sciences. However, how and whether shared data are being reused is not always apparent, as such information is not easily accessible from the majority of research data repositories. This study aims to understand the practice of data citation and how data are being reused over the years, focusing on biodiversity since research data are frequently reused in this field. Metadata of 38,878 datasets, including citation counts, were collected through the Global Biodiversity Information Facility (GBIF) API for this purpose. GBIF was used as a data source since it provides citation counts for datasets, not a commonly available feature for most repositories. Analysis of dataset types, citation counts, and creation and update times suggests that the citation rate varies for different types of datasets: occurrence datasets, which have more granular information, have higher citation rates than checklist and metadata-only datasets. Another finding is that biodiversity datasets on GBIF are frequently updated, which is unique to this field. The majority of datasets from the earliest year, 2007, were still being updated after 11 years, with none left unmodified since creation. For each year between 2007 and 2017, we compared the correlations between update time and citation rate for four different types of datasets. While recent datasets do not show any correlation, 3 to 4 year old datasets show a weak correlation, where datasets that were updated more recently received more citations. The results suggest that it takes several years to accumulate citations for research datasets.
However, this investigation found that when the same datasets are searched on Google Scholar or Scopus, the number of citations is often not the same as on GBIF. Hence, a future aim is to further explore the citation count system adopted by GBIF to evaluate its reliability and whether it can be applied to other fields of study as well.

Keywords: data citation, data reuse, research data sharing, webometrics

Procedia PDF Downloads 178
891 Analyzing Industry-University Collaboration Using Complex Networks and Game Theory

Authors: Elnaz Kanani-Kuchesfehani, Andrea Schiffauerova

Abstract:

Due to the novelty of nanotechnology, its highly knowledge-intensive content, and its invaluable application in almost all technological fields, close interaction between university and industry is essential. A possible gap between the academic strength to generate good nanotechnology ideas and the industrial capacity to absorb them can thus have far-reaching consequences. In order to enhance the collaboration between the two parties, a better understanding of knowledge transfer within the university-industry relationship is needed. The objective of this research is to investigate the research collaboration between academia and industry in Canadian nanotechnology and to propose the best cooperative strategy to maximize the quality of the produced knowledge. First, a network of all Canadian academic and industrial nanotechnology inventors is constructed using patent data from the USPTO (United States Patent and Trademark Office), and it is analyzed with social network analysis software. The actual level of university-industry collaboration in Canadian nanotechnology is determined, and the significance of each group of actors in the network (academic vs. industrial inventors) is assessed. Second, a novel methodology is proposed, in which the network of nanotechnology inventors is assessed from a game theoretic perspective. It involves studying a cooperative game with n players, each having at most n-1 decisions to choose from. The equilibrium leads to a strategy for all the players to choose their co-worker in the next period in order to maximize the correlated payoff of the game. The payoffs of the game represent the quality of the produced knowledge, based on the citations of the patents. The best suggestion for the next collaborative relationship is provided for each actor from a game theoretic point of view in order to maximize the quality of the produced knowledge.
One of the major contributions of this work is the novel approach which combines game theory and social network analysis for the case of large networks. This approach can serve as a powerful tool in the analysis of the strategic interactions of the network actors within the innovation systems and other large scale networks.
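The partner-selection step at the heart of the methodology above can be reduced to a toy decision rule: each inventor picks the collaborator with the highest expected payoff, where the payoff proxies citation-based knowledge quality. The payoff values and actor names below are invented for illustration, not derived from the USPTO network.

```python
def best_partner(payoffs, player):
    """Return the co-worker maximizing the expected payoff for `player`.
    payoffs: dict (player, candidate_partner) -> expected quality of the
    joint patents (a citation-based score in this sketch)."""
    candidates = [(quality, partner)
                  for (actor, partner), quality in payoffs.items()
                  if actor == player]
    # max over (quality, partner) tuples picks the highest-quality option
    return max(candidates)[1]
```

In the full methodology this choice is made simultaneously by all n players, each with up to n-1 options, and the equilibrium of that cooperative game yields the recommended collaborations.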

Keywords: cooperative strategy, game theory, industry-university collaboration, knowledge production, social network analysis

Procedia PDF Downloads 259
890 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millenium

Authors: Janne Engblom, Elias Oikarinen

Abstract:

The understanding of housing price dynamics is of importance to a great number of agents: to portfolio investors, banks, real estate brokers and construction companies as well as to policy makers and households. A panel dataset is one that follows a given sample of individuals over time, and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates, and several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for the cross-sectional dependence caused by common structures of the economy. In the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of housing prices as the dependent variable and the first differences of per capita income, interest rate, housing stock and lagged price, together with the deviation of housing prices from their long-run equilibrium level, as independent variables. These deviations were also estimated from the data. The aim of the analysis was to compare estimates between 1980-1999 and 2000-2012. Based on data from 50 U.S. cities over 1980-2012, differences in the short-run housing price dynamics estimates were mostly significant when the two time periods were compared. Significance tests of the differences were provided by the model containing interaction terms of the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach.
This indicates a good fit of the CCE estimator model. The estimates of the dynamic panel data model were in line with the theory of housing price dynamics. The results also suggest that the dynamics of the housing market evolve over time.
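The core idea of a CCE-style regression is to augment each unit's regression with cross-sectional averages of the dependent and independent variables, so that unobserved common factors are absorbed. The sketch below shows only that augmentation step with a toy one-regressor static model and a tiny normal-equations solver; it is an illustrative simplification, not the dynamic estimator or the housing data used in the study.

```python
def period_means(panel, T, N):
    """Cross-sectional averages of y and x for each period t.
    panel: dict (i, t) -> (y_it, x_it)."""
    ybar = [sum(panel[(i, t)][0] for i in range(N)) / N for t in range(T)]
    xbar = [sum(panel[(i, t)][1] for i in range(N)) / N for t in range(T)]
    return ybar, xbar

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gauss-Jordan elimination."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        d = A[col][col]
        A[col] = [v / d for v in A[col]]
        b[col] /= d
        for r in range(k):
            if r != col and A[r][col] != 0.0:
                f = A[r][col]
                A[r] = [vr - f * vc for vr, vc in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b

def cce_fit(panel, T, N):
    """Regress y_it on [1, x_it, xbar_t, ybar_t]: the cross-sectional
    averages proxy the unobserved common factor."""
    ybar, xbar = period_means(panel, T, N)
    X, y = [], []
    for (i, t), (yit, xit) in sorted(panel.items()):
        X.append([1.0, xit, xbar[t], ybar[t]])
        y.append(yit)
    return ols(X, y)
```

On noise-free toy data generated as y_it = 2·x_it + f_t with a common factor f_t, the augmented regression recovers the slope of 2 exactly, which standard OLS on x alone would miss.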

Keywords: dynamic model, panel data, cross-sectional dependence, interaction model

Procedia PDF Downloads 252
889 Formula Student Car: Design, Analysis and Lap Time Simulation

Authors: Rachit Ahuja, Ayush Chugh

Abstract:

Aerodynamic forces and moments, as well as tire-road forces, largely affect the maneuverability of a vehicle. Car manufacturers are fascinated and influenced by the various aerodynamic improvements made in formula cars, and there is a constant effort to apply these improvements to road vehicles. In motor racing, the key differentiating factor in a high performance car is its ability to maintain the highest possible acceleration in the appropriate direction. One of the main areas of concern in motor racing is balancing the aerodynamic forces and streamlining the flow of air across the body of the vehicle. At present, formula racing cars are regulated by stringent FIA norms: there are constraints on the dimensions of the vehicle, engine capacity, etc. One of the fields in which there is thus large scope for improvement is the aerodynamics of the vehicle. In this project work, an attempt has been made to design a formula student (FS) car, improve its aerodynamic characteristics through steady-state CFD simulations and simultaneously calculate its lap time. Initially, a CAD model of the formula student car is made using SOLIDWORKS as per the given dimensions, and a steady-state external air-flow simulation is performed on the baseline model of the car, without any add-on device, to evaluate and analyze the air-flow pattern around the car and the aerodynamic forces using the FLUENT solver. A detailed survey of different add-on devices used in racing applications, such as the front wing, diffuser, shark fin, T-wing, etc., is made, and geometric models of these add-on devices are created. These add-on devices are assembled with the baseline model, and steady-state CFD simulations are done on the modified car to evaluate the aerodynamic effects of these add-on devices. Later, a comparison of the lap time simulation of the formula student car with and without the add-on devices is done with the help of MATLAB.
Aerodynamic performance parameters, such as lift, drag and their coefficients, are evaluated for different configurations and designs of the add-on devices at different vehicle speeds. The parametric CFD simulations of the formula student car fitted with add-on devices show a considerable reduction in drag and lift forces, besides streamlining the airflow across the car. The best possible configuration of these add-on devices is obtained from these CFD simulations, and the use of these add-on devices shows an improvement in the performance of the car, as compared through the various lap time simulations.
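The lift and drag forces compared across configurations follow the standard relation F = ½·ρ·v²·C·A, with the coefficients C_d and C_l taken from the CFD runs. A minimal sketch of that relation is below; the coefficient and frontal-area values in the example are illustrative, not results from the study's simulations.

```python
RHO_AIR = 1.225  # kg/m^3, air density at sea level

def aero_force_N(v_mps, coefficient, frontal_area_m2, rho=RHO_AIR):
    """F = 0.5 * rho * v^2 * C * A: drag with C_d, or lift/downforce
    with C_l, at speed v in m/s."""
    return 0.5 * rho * v_mps ** 2 * coefficient * frontal_area_m2
```

For instance, a hypothetical C_d·A product of 1.0 m² at 20 m/s corresponds to 245 N of drag, which is the kind of per-speed force a lap-time model integrates along the track.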

Keywords: aerodynamic performance, front wing, laptime simulation, t-wing

Procedia PDF Downloads 198
888 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years, it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which might lead to many other problems like shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that would help in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. There are various career guidance systems that work based on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to the data to predict suitable careers. Besides helping users with their career choice, these systems provide numerous supporting features that are very useful while making this hard decision.
They help the candidate to recognize where he/she specifically lacks sufficient skills so that the candidate can improve those skills. They are also capable to offer an e-learning platform, taking into account the user's lack of knowledge. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
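A minimal sketch of the nearest-neighbour (KNN) classification such systems rely on, using made-up skill-score features and career labels (all names and numbers here are illustrative, not taken from the paper):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest neighbours.
    `train` is a list of (feature_vector, career_label) pairs."""
    dists = sorted(
        (math.dist(features, query), label) for features, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical profiles: (technical skill, communication, creativity) on a 0-10 scale
train = [
    ((9, 3, 4), "software engineer"),
    ((8, 4, 5), "software engineer"),
    ((3, 9, 6), "counselor"),
    ((2, 8, 7), "counselor"),
    ((5, 5, 9), "designer"),
    ((4, 6, 9), "designer"),
]

print(knn_predict(train, (8, 3, 5)))  # nearest profiles are engineers
```

A production system would train on real applicant data and validate the choice of k, but the voting logic is the same.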

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 81
887 Linguistic Competencies of Students with Hearing Impairment

Authors: Munawar Malik, Muntaha Ahmad, Khalil Ullah Khan

Abstract:

Linguistic abilities of students with hearing impairment remain a concern for educationists. Emerging technological support and provisions in the recent era claim to have addressed the situation and to have contributed significantly to learners' linguistic repertoire. As a descriptive, quantitative study, the purpose of this research was to assess the linguistic competencies of students with hearing impairment in the English language. The goals were further broken down to identify the level of reading abilities in the subject population. The population comprised students with HI studying at higher secondary level in Lahore. Simple random sampling was used to choose a sample of fifty students. A purposive curriculum-based assessment, aligned with the accelerated learning program of the Punjab Government, was designed to assess linguistic competence in the sample. In addition, an Informal Reading Inventory (IRI) corresponding to reading levels was developed by the researchers and duly validated and piloted before final use. Descriptive and inferential statistics were utilized to reach the findings. Spearman's correlation was used to find the relationship between degree of hearing loss, grade level, gender, and type of amplification device. Independent-samples t-tests were used to compare means among groups. Major findings of the study revealed that students with hearing impairment exhibit significant deviation from the mean scores when compared in terms of grades, severity, and amplification device. The study revealed that the students with HI have not yet attained an independent level of reading for their grades, as the majority falls at the frustration level of word recognition and passage comprehension. The poorer performance can be attributed to lower linguistic competence, as shown in the frustration levels of reading, writing, and comprehension.
The correlation analysis did reflect improved performance grade-wise; however, scores could only correspond to the frustration level, and the independent level was never achieved. Reported achievements at the instructional level suggest that the subject population's linguistic skills may improve further if practiced purposively.
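The Spearman rank correlation used in the analysis above can be sketched as follows (a pure-Python illustration with hypothetical scores; a real study would use a statistics package):

```python
import math

def rank(xs):
    """Assign 1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical: degree of hearing loss (dB) vs. reading score
hearing_loss = [45, 60, 75, 90]
reading_score = [30, 26, 20, 12]
print(round(spearman(hearing_loss, reading_score), 2))
```

Monotonically decreasing scores against increasing loss give rho = -1, which is the kind of relationship the study tests for.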

Keywords: linguistic competence, hearing impairment, reading levels, educationist

Procedia PDF Downloads 69
886 Numerical Analysis of Heat Transfer in Water Channels of the Opposed-Piston Diesel Engine

Authors: Michal Bialy, Marcin Szlachetka, Mateusz Paszko

Abstract:

This paper discusses the CFD results of heat transfer in the water channels in the engine body. The research engine was a newly designed Diesel combustion engine. The engine has three cylinders with three pairs of opposed pistons inside. The engine will be able to generate 100 kW of mechanical power at a crankshaft speed of 3,800-4,000 rpm. The water channels are in the engine body along the axis of the three cylinders, around the three combustion chambers. The water channels transfer the combustion heat that occurs in the cylinders to the external radiator. This CFD research was based on the ANSYS Fluent software and aimed to optimize the geometry of the water channels, which should carry a maximum flow of heat from the combustion chamber to the external radiator. Based on the parallel simulation research, the boundary and initial conditions enabled us to specify average values of key parameters for our numerical analysis. Our simulation used the averaged momentum equations and the two-equation k-epsilon turbulence model; a realizable k-epsilon model with standard wall functions was used. The turbulence intensity factor was 10%. The working fluid mass flow rate was calculated for a single typical value, specified in line with research into the flow rate of automotive engine cooling pumps used in engines of similar power. The research uses a series of geometric models which differ, for instance, in the shape of the cross-section of the channel along the axis of the cylinder. The results are presented as colour distribution maps of temperature, velocity fields and heat flow through the cylinder walls. Due to limitations of space, our paper presents the results for the most representative geometric model only. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK 'PZL-KALISZ' S.A. and is part of Grant Agreement No.
POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.

Keywords: Ansys fluent, combustion engine, computational fluid dynamics CFD, cooling system

Procedia PDF Downloads 221
885 Problem Solving: Process or Product? A Mathematics Approach to Problem Solving in Knowledge Management

Authors: A. Giannakopoulos, S. B. Buckley

Abstract:

Problem solving in any field is recognised as a prerequisite for any advancement in knowledge. For example in South Africa it is one of the seven critical outcomes of education together with critical thinking. As a systematic way to problem solving was initiated in mathematics by the great mathematician George Polya (the father of problem solving), more detailed and comprehensive ways in problem solving have been developed. This paper is based on the findings by the author and subsequent recommendations for further research in problem solving and critical thinking. Although the study was done in mathematics, there is no doubt by now in almost anyone’s mind that mathematics is involved to a greater or a lesser extent in all fields, from symbols, to variables, to equations, to logic, to critical thinking. Therefore it stands to reason that mathematical principles and learning cannot be divorced from any field. In management of knowledge situations, the types of problems are similar to mathematics problems varying from simple to analogical to complex; from well-structured to ill-structured problems. While simple problems could be solved by employees by adhering to prescribed sequential steps (the process), analogical and complex problems cannot be proceduralised and that diminishes the capacity of the organisation of knowledge creation and innovation. The low efficiency in some organisations and the low pass rates in mathematics prompted the author to view problem solving as a product. The authors argue that using mathematical approaches to knowledge management problem solving and treating problem solving as a product will empower the employee through further training to tackle analogical and complex problems. 
The question the authors asked was: if problem solving and critical thinking are indeed basic skills necessary for the advancement of knowledge, why is there so little literature in knowledge management (KM) about them, about how they are connected, and about how they advance KM? This paper concludes with a conceptual model based on generally accepted principles of knowledge acquisition (developing a learning organisation), knowledge creation, sharing, dissemination and storage, the five pillars of KM. This model also expands on Gray's framework on KM practices and problem solving and opens the door to a new approach to training employees on general and domain-specific problems, which can be adapted in any type of organisation.

Keywords: critical thinking, knowledge management, mathematics, problem solving

Procedia PDF Downloads 598
884 Convergence Results of Two-Dimensional Homogeneous Elastic Plates from Truncation of Potential Energy

Authors: Erick Pruchnicki, Nikhil Padhye

Abstract:

Plates are important engineering structures which have attracted extensive research since the 19th century. The subject of this work is the static analysis of a linearly elastic homogeneous plate under small deformations. A 'thin plate' is a three-dimensional structure with a small transverse dimension relative to a flat mid-surface. The general aim of any plate theory is to deduce a two-dimensional model, in terms of mid-surface quantities, that approximately and accurately describes the plate's deformation. In recent decades, a common starting point for this purpose is to utilize a series expansion of the displacement field across the thickness dimension in terms of the thickness parameter (h). These attempts are mathematically consistent in deriving leading-order plate theories based on certain a priori scaling between the thickness and the applied loads; for example, asymptotic methods are aimed at generating leading-order two-dimensional variational problems by postulating a formal asymptotic expansion of the displacement fields. Such methods rigorously generate a hierarchy of two-dimensional models depending on the order of magnitude of the applied load with respect to the plate thickness. However, in practice, applied loads are external and thus not directly linked to or dependent on the geometry/thickness of the plate, rendering any such model (based on a priori scaling) of limited practical utility. In other words, the main limitation of these approaches is that they do not furnish a single plate model for all orders of applied loads. Following the analogy of recent efforts deploying Fourier-series expansion to study the convergence of reduced models, we propose two-dimensional models resulting from truncation of the potential energy and rigorously prove the convergence of these two-dimensional plate models to the parent three-dimensional linear elasticity with increasing truncation order of the potential energy.
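The thickness-wise expansion underlying such reduced models can be sketched as follows (the notation below is illustrative and not taken from the paper): the displacement field is expanded in Legendre polynomials of the transverse coordinate and truncated at order N,

```latex
u_i(x_1, x_2, x_3) \;\approx\; \sum_{n=0}^{N} u_i^{(n)}(x_1, x_2)\, P_n\!\left(\frac{2x_3}{h}\right),
\qquad -\frac{h}{2} \le x_3 \le \frac{h}{2},
```

and a two-dimensional model follows by inserting this truncated series into the three-dimensional potential energy and taking variations with respect to the mid-surface coefficients u_i^{(n)}.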

Keywords: plate theory, Fourier-series expansion, convergence result, Legendre polynomials

Procedia PDF Downloads 113
883 Role of Estrogen Receptor-alpha in Mammary Carcinoma by Single Nucleotide Polymorphisms and Molecular Docking: An In-silico Analysis

Authors: Asif Bilal, Fouzia Tanvir, Sibtain Ahmad

Abstract:

Estrogen receptor alpha, also known as estrogen receptor 1 (ESR1), is strongly implicated in the risk of mammary carcinoma. The objectives of this study were to identify non-synonymous SNPs of the estrogen receptor and their association with breast cancer, and to identify the chemotherapeutic responses of phytochemicals against it via an in-silico study design. For this purpose, different online tools were used: SIFT, PolyPhen, PolyPhen-2, fuNTRp, and SNAP2 to identify pathogenic SNPs; SNP&GO, PhD-SNP, PredictSNP, MAPP, SNAP, Meta-SNP, and PANTHER to find disease-associated SNPs; and MuPro, I-Mutant, and ConSurf to check protein stability. Post-translational modifications (PTMs) were detected by MusiteDeep, protein secondary structure by SOPMA, protein-protein interaction by STRING, and molecular docking by PyRx. Seven SNPs with rsIDs rs760766066, rs779180038, rs956399300, rs773683317, rs397509428, rs755020320, and rs1131692059, causing the mutations I229T, R243C, Y246H, P336R, Q375H, R394S, and R394H, respectively, were found to be completely deleterious. The PTMs found were glycosylation 96 times, ubiquitination 30 times, and acetylation once; no hydroxylation or phosphorylation was found. The protein secondary structure consisted of alpha helix (Hh) 28%, extended strand (Ee) 21%, beta turn (Tt) 7.89%, and random coil (Cc) 44.11%. Protein-protein interaction analysis revealed strong interactions with myeloperoxidase, xanthine dehydrogenase, carboxylesterase 1, glutathione S-transferase Mu 1, and estrogen receptors. For molecular docking, we used Asiaticoside, Ilekudinuside, Robustoflavone, Irinotecan, Withanolides, and 9-amin0-5 as ligands extracted from phytochemicals and docked them with this protein. We found strong interactions (binding scores from -8.6 to -9.7) of these phytochemical ligands at wild-type ESR1 and two mutants (I229T and R394S).
It is concluded that these SNPs found in ESR1 are involved in breast cancer and that the given phytochemicals could be highly helpful against breast cancer as chemotherapeutic agents. Further in vitro and in vivo analyses should be performed to confirm these interactions.

Keywords: breast cancer, ESR1, phytochemicals, molecular docking

Procedia PDF Downloads 71
881 Small-Group Case-Based Teaching: Effects on Student Achievement, Critical Thinking, and Attitude toward Chemistry

Authors: Reynante E. Autida, Maria Ana T. Quimbo

Abstract:

The chemistry education curriculum provides an excellent avenue where students learn the principles and concepts in chemistry and, at the same time, as a central science, better understand related fields. However, the teaching approach used by teachers affects student learning. Case-based teaching (CBT) is one of the various forms of the inductive method: the teacher starts with specifics and then proceeds to general principles. The students' role in inductive learning shifts from being passive, as in the traditional approach, to being active in learning. In this paper, the effects of Small-Group Case-Based Teaching (SGCBT) on college chemistry students' achievement, critical thinking, and attitude toward chemistry, including the relationships between each of these variables, were determined. A quasi-experimental counterbalanced design with a pre-post control group was used to determine the effects of SGCBT on Engineering students of four intact classes (two treatment groups and two control groups) in one of the State Universities in Mindanao. The independent variable is the type of teaching approach (SGCBT versus pure lecture-discussion teaching, or PLDT), while the dependent variables are chemistry achievement (exam scores) and scores in critical thinking and chemistry attitude. Both Analysis of Covariance (ANCOVA) and t-tests (within and between groups and on gain scores) were used to compare the effects of SGCBT versus PLDT on students' chemistry achievement, critical thinking, and attitude toward chemistry, while Pearson product-moment correlation coefficients were calculated to determine the relationships between each of the variables. Results show that the use of SGCBT fosters a positive attitude toward chemistry and provides some indications as well of improved chemistry achievement of students compared with PLDT. Meanwhile, the effects of PLDT and SGCBT on critical thinking are comparable.
Furthermore, correlational analysis and focus group interviews indicate that the use of SGCBT not only supports development of positive attitude towards chemistry but also improves chemistry achievement of students. Implications are provided in view of the recent findings on SGCBT and topics for further research are presented as well.
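The between-group comparisons above rest on t statistics; a minimal Welch-style sketch (with made-up exam scores, purely for illustration of the computation) is:

```python
import math

def welch_t(a, b):
    """Two-sample t statistic allowing unequal variances (Welch's form)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical exam scores for a treatment (SGCBT) and control (PLDT) class
sgcbt = [78, 82, 75, 88, 80]
pldt = [70, 74, 69, 77, 72]
print(round(welch_t(sgcbt, pldt), 2))  # positive t favours the treatment group
```

The study's actual tests (paired within-group t-tests and ANCOVA with covariates) are richer, but the core statistic has this form.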

Keywords: case-based teaching, small-group learning, chemistry cases, chemistry achievement, critical thinking, chemistry attitude

Procedia PDF Downloads 299
880 3D Interpenetrated Network Based on 1,3-Benzenedicarboxylate and 1,2-Bis(4-Pyridyl) Ethane

Authors: Laura Bravo-García, Gotzone Barandika, Begoña Bazán, M. Karmele Urtiaga, Luis M. Lezama, María I. Arriortua

Abstract:

Solid coordination networks (SCNs) are materials consisting of metal ions or clusters that are linked by polyfunctional organic ligands and can be designed to form tridimensional frameworks. Their structural features, such as high surface areas, thermal stability, and in other cases large cavities, have opened a wide range of applications in fields like drug delivery, host-guest chemistry, biomedical imaging, chemical sensing, heterogeneous catalysis and others related to greenhouse gas storage or even separation. In this sense, the use of polycarboxylate anions and dipyridyl ligands is an effective strategy to produce extended structures with the characteristics needed for these applications. In this context, a novel compound, [Cu4(m-BDC)4(bpa)2DMF]•DMF, has been obtained by microwave synthesis, where m-BDC is 1,3-benzenedicarboxylate and bpa is 1,2-bis(4-pyridyl)ethane. The crystal structure can be described as a three-dimensional framework formed by two equal, interpenetrated networks. Each network consists of two different Cu(II) dimers. Dimer 1 has two copper atoms with square pyramidal coordination, while dimer 2 has one copper with square pyramidal coordination and the other with octahedral coordination; the latter dimer is unique in the literature. Therefore, the combination of both types of dimers is unprecedented. Thus, benzenedicarboxylate ligands form sinusoidal chains between dimers of the same type and also connect both chains, forming layers in the (100) plane. These layers are connected along the [100] direction through the bpa ligand, giving rise to a 3D network with voids of 10 Å² on average. However, the fact that there are two interpenetrated networks results in a significant reduction of the available volume. Structural analysis was carried out by means of single crystal X-ray diffraction and IR spectroscopy.
Thermal and magnetic properties have been measured by means of thermogravimetry (TG), X-ray thermodiffractometry (TDX), and electron paramagnetic resonance (EPR). Additionally, CO2 and CH4 high pressure adsorption measurements have been carried out for this compound.

Keywords: gas adsorption, interpenetrated networks, magnetic measurements, solid coordination network (SCN), thermal stability

Procedia PDF Downloads 325
879 Study of the Montmorillonite Effect on PET/Clay and PEN/Clay Nanocomposites

Authors: F. Zouai, F. Z. Benabid, S. Bouhelal, D. Benachour

Abstract:

Polymer/clay nanocomposites are a relatively important area of research. These reinforced plastics have attracted considerable attention in scientific and industrial fields because a very small amount of clay can significantly improve the properties of the polymer. The polymeric matrices used in this work are two saturated polyesters, i.e., poly(ethylene terephthalate) (PET) and poly(ethylene naphthalate) (PEN). The success of processing compatible blends, based on PET/PEN/clay nanocomposites in one step by reactive melt extrusion, is described. Untreated clay was first purified and functionalized 'in situ' with a compound based on an organic peroxide/sulfur mixture and tetramethylthiuram disulfide as the activator for sulfur. The PET and PEN materials were first separately mixed in the molten state with the functionalized clay. The PET/4 wt% clay and PEN/7.5 wt% clay compositions showed total exfoliation. These compositions, denoted nPET and nPEN, respectively, were used to prepare new n(PET/PEN) nanoblends in the same mixing batch. The n(PET/PEN) nanoblends were compared to neat PET/PEN blends. The blends and nanocomposites were characterized using various techniques, and their microstructural and nanostructural properties were investigated. Fourier transform infrared spectroscopy (FTIR) results showed that the exfoliation of the tetrahedral clay nanolayers is complete and the octahedral structure totally disappears. It was shown that total exfoliation, confirmed by wide angle X-ray scattering (WAXS) measurements, contributes to the enhancement of impact strength and tensile modulus. In addition, WAXS results indicated that all samples are amorphous. The differential scanning calorimetry (DSC) study indicated the occurrence of one glass transition temperature Tg, one crystallization temperature Tc and one melting temperature Tm for every composition.
This was evidence that both PET/PEN and nPET/nPEN blends are compatible in the entire range of compositions. In addition, the nPET/nPEN blends showed lower Tc and higher Tm values than the corresponding neat PET/PEN blends. In conclusion, the results obtained indicate that n(PET/PEN) blends are different from the pure ones in nanostructure and physical behavior.

Keywords: blends, exfoliation, XRD, DSC, montmorillonite, nanocomposites, PEN, PET, plastograph, reactive melt-mixing

Procedia PDF Downloads 298
878 Algorithms Inspired from Human Behavior Applied to Optimization of a Complex Process

Authors: S. Curteanu, F. Leon, M. Gavrilescu, S. A. Floria

Abstract:

Optimization algorithms inspired by human behavior were applied in this approach, associated with neural network models. The algorithms belong to the classes of human learning and cooperation behaviors and human competitive behavior. For the first class, the main strategies include random learning, individual learning, and social learning, and the selected algorithms are: simplified human learning optimization (SHLO), social learning optimization (SLO), and teaching-learning based optimization (TLBO). For the second class, the concept of learning is associated with competitiveness, and the selected algorithms are sports-inspired algorithms (the Football Game Algorithm, FGA, and the Volleyball Premier League, VPL) and the Imperialist Competitive Algorithm (ICA). A real process, the synthesis of polyacrylamide-based multicomponent hydrogels, where some parameters are difficult to obtain experimentally, is considered as a case study. Reaction yield and swelling degree are predicted as a function of reaction conditions (acrylamide concentration, initiator concentration, crosslinking agent concentration, temperature, reaction time, and amount of inclusion polymer, which could be starch, poly(vinyl alcohol) or gelatin). The experimental results contain 175 data points. Artificial neural networks were obtained in optimal form with biologically inspired algorithms, the optimization being performed at two levels: structural and parametric. Feedforward neural networks with one or two hidden layers and no more than 25 neurons in the intermediate layers were obtained, with correlation coefficients in the validation phase over 0.90. The best results were obtained with the TLBO algorithm, the correlation coefficient being 0.94 for an MLP(6:9:20:2), a feedforward neural network with two hidden layers of 9 and 20 intermediate neurons, respectively. The good results obtained prove the efficiency of the optimization algorithms.
More than the good results, what is important in this approach is the simulation methodology, including neural networks and optimization biologically inspired algorithms, which provide satisfactory results. In addition, the methodology developed in this approach is general and has flexibility so that it can be easily adapted to other processes in association with different types of models.
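The MLP(6:9:20:2) topology named above can be sketched as a plain forward pass (weights are randomly initialized here purely for illustration; the paper's networks were trained with the optimization algorithms described):

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    """One dense layer: n_out neurons, each with n_in weights plus a bias."""
    return [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_out)]

def forward(layer, inputs, act=math.tanh):
    # w[-1] is the bias; zip pairs the first n_in weights with the inputs
    return [act(w[-1] + sum(wi * xi for wi, xi in zip(w, inputs))) for w in layer]

# MLP(6:9:20:2): 6 reaction conditions in, hidden layers of 9 and 20, 2 outputs
layers = [make_layer(6, 9), make_layer(9, 20), make_layer(20, 2)]

x = [0.5, 0.1, 0.3, 0.7, 0.2, 0.9]  # normalized reaction conditions (illustrative)
for layer in layers[:-1]:
    x = forward(layer, x)
y = forward(layers[-1], x, act=lambda v: v)  # linear outputs: yield, swelling degree
print(len(y))
```

Structural optimization searches over the layer sizes, while parametric optimization tunes the weight values.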

Keywords: artificial neural networks, human behaviors of learning and cooperation, human competitive behavior, optimization algorithms

Procedia PDF Downloads 109
877 Factors Associated with Seroconversion of Oral Polio Vaccine among the Children under 5 Year in District Mirpurkhas, Pakistan 2015

Authors: Muhammad Asif Syed, Mirza Amir Baig

Abstract:

Background: Pakistan is one of the two remaining polio-endemic countries, posing a significant public health challenge for global polio eradication due to the failure to interrupt polio transmission. Country-specific seroprevalence studies help in the evaluation of immunization program performance and the susceptibility of the population to poliovirus, and in the identification of the existing level of immunity together with the factors that affect seroconversion of the oral polio vaccine (OPV). The objective of the study was to find out the factors associated with seroconversion of the OPV among children aged 6-59 months in Pakistan. Methods: A hospital-based cross-sectional serosurvey was undertaken in May-June 2015 at District Mirpurkhas, Sindh, Pakistan. A total of 180 children aged 6-59 months were selected using systematic random sampling from Muhammad Medical College Hospital, Mirpurkhas. Demographic, vaccination history and risk factor information was collected from the parents/guardians. Blood samples were collected and tested for the detection of poliovirus IgG antibodies using an ELISA kit. IgG titers of <10 IU/ml, 50 to <150 IU/ml and >150 IU/ml were defined as negative, weak positive and positive immunity, respectively. Pearson's chi-square test was used to determine differences in seroprevalence in univariate analysis. Results: A total of 180 subjects were enrolled; the mean age was 23 months (range 7-59 months). Of these, 160 (89%) children were well protected and 18 (10%) partially protected against poliovirus. Two (1.1%) children had no protection against poliovirus, as they had poliovirus IgG antibody titers of <10 IU/ml. Both negative cases were female, in the age group 12-23 months, from an urban area, and with BMI below the 50th percentile. There was a difference between normal and wasted children, and it attained statistical significance (χ2 = 35.5, p = 0.00).
The difference in seroconversion was also observed in relation to gender (χ2 = 6.23, p = 0.04), duration of breastfeeding (χ2 = 18.6, p = 0.04), history of diarrheal disease before polio vaccine administration (χ2 = 7.7, p = 0.02), and stunting (χ2 = 114, p = 0.00). Conclusion: This study demonstrated that nearly 90% of children achieved seroconversion of OPV and are well protected against poliovirus. There is an urgent need to focus on factors like duration of breastfeeding, diarrheal diseases, and malnutrition (acute and chronic) among children as part of the immunization strategy.
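The titer cut-offs above can be expressed as a small classification helper (a sketch only; the abstract does not define the 10 to <50 IU/ml band or a titer of exactly 150, so those cases return "indeterminate" here):

```python
def classify_titer(titer_iu_ml):
    """Classify a poliovirus IgG titer (IU/ml) using the study's stated cut-offs."""
    if titer_iu_ml < 10:
        return "negative"
    if 50 <= titer_iu_ml < 150:
        return "weak positive"
    if titer_iu_ml > 150:
        return "positive"
    # 10 to <50 IU/ml and exactly 150 IU/ml are not defined in the abstract
    return "indeterminate"

print(classify_titer(5), classify_titer(100), classify_titer(200))
```

Making the undefined bands explicit is deliberate: a real serosurvey protocol would have to specify how such titers are handled.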

Keywords: seroconversion, oral polio vaccine, Polio, Pakistan

Procedia PDF Downloads 303
876 Magnetic Bio-Nano-Fluids for Hyperthermia

Authors: Z. Kolacinski, L. Szymanski, G. Raniszewski, D. Koza, L. Pietrzak

Abstract:

Magnetic Bio-Nano-Fluid (BNF) can be composed of a buffer fluid such as plasma and magnetic nanoparticles such as iron, nickel, cobalt and their oxides. However, iron is one of the best elements for magnetization by electromagnetic radiation. It can be used as a tool for medical diagnosis and treatment. Radio frequency (RF) radiation is able to heat iron nanoparticles due to magnetic hysteresis. Electromagnetic heating of iron nanoparticles and ferrofluid BNFs can be successfully used for non-invasive thermal ablation of cancer cells. Moreover, iron atoms can be carried by carbon nanotubes (CNTs) if iron is used as the catalyst for CNT synthesis. The CNTs then become iron containers, screening the iron content against oxidation. We will present a method of addressing CNTs to the required cells. For thermal ablation of cancer cells, we use radio frequencies for which the interaction with the human body should be limited to a minimum. Generally, the application of RF energy fields for medical treatment is justified by deep tissue penetration. Highly iron-doped CNTs as the carriers creating a magnetic fluid will be presented. An excessive catalyst injection method using an electrical furnace and a microwave plasma reactor will be presented. This way, it is possible to grow Fe-filled CNTs on a moving surface in a continuous synthesis process. This also allows producing a uniform carpet of the Fe-filled CNT carriers. For the experimental work targeted at cell ablation, we used an RF generator to measure the increase in temperature for samples such as: a solution of Fe2O3 in BNF, which can be a plasma-like buffer; solutions of pure iron of different concentrations in a plasma-like buffer and in a buffer used for cell culture; and solutions of multi-walled carbon nanotubes (MWCNTs) of different concentrations in a plasma-like buffer and in a buffer used for cell culture.
Then, targeted therapies, which can be effective if the carriers are able to distinguish between the physiology of cancerous and healthy cells, are considered. We have developed an approach based on ligand-receptor or antibody-antigen interactions for the case of colon cancer.

Keywords: cancer treatment, carbon nanotubes, drug delivery, hyperthermia, iron

Procedia PDF Downloads 416
875 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa

Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees

Abstract:

The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observation of solar radiation is available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a 'retrospective analysis' of atmospheric parameters generated by assimilating observation data from various sources, including ground observations, satellites, ships, and aircraft, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface-measured data using statistical analysis. The distribution of global solar radiation (GSR) was estimated over six selected locations in North Africa during the ten-year period from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE) and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in dataset performance, which reveals significant variation of errors across seasons; the performance of the dataset changes with the temporal resolution of the data used for comparison. Monthly mean values of the data show better performance, but the accuracy of the data is compromised. The ERA-5 solar radiation data are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R²) varies from 0.93 to 0.99 for the different selected sites in North Africa in the present research.
The goal of this research is to provide a good representation of global solar radiation to support solar energy applications in all fields; this can be done by using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model that gives good results.
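The three error statistics reported above compare reanalysis estimates against station measurements in the standard way. A minimal sketch of how they are computed (the values and function name here are illustrative, not from the study):

```python
import math

def error_metrics(estimated, observed):
    """Compare reanalysis estimates against ground measurements.

    Returns (rmse, mbe, mae), where
      RMSE = sqrt(mean((est - obs)^2))
      MBE  = mean(est - obs)           # sign shows over-/under-estimation
      MAE  = mean(|est - obs|)
    """
    diffs = [e - o for e, o in zip(estimated, observed)]
    n = len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mbe = sum(diffs) / n
    mae = sum(abs(d) for d in diffs) / n
    return rmse, mbe, mae

# Hypothetical daily GSR values (kWh/m^2): ERA-5 estimates vs. station data
era5 = [5.8, 6.1, 5.5, 6.4]
station = [5.9, 6.0, 5.7, 6.2]
rmse, mbe, mae = error_metrics(era5, station)
```

Note that MBE can be near zero even when RMSE and MAE are not, since positive and negative biases cancel, which is why all three statistics are reported together.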

Keywords: solar energy, solar radiation, ERA-5, potential energy

Procedia PDF Downloads 214
874 Numerical Study of the Breakdown of Surface Divergence Based Models for Interfacial Gas Transfer Velocity at Large Contamination Levels

Authors: Yasemin Akar, Jan G. Wissink, Herlina Herlina

Abstract:

The effect of various levels of contamination on the interfacial air–water gas transfer velocity is studied by Direct Numerical Simulation (DNS). The interfacial gas transfer is driven by isotropic turbulence, introduced at the bottom of the computational domain, diffusing upwards. The isotropic turbulence is generated in a separate, concurrently running large-eddy simulation (LES). The flow fields in the main DNS and the LES are solved using fourth-order discretisations of convection and diffusion. To solve the transport of dissolved gases in water, a fifth-order-accurate WENO scheme is used for scalar convection, combined with a fourth-order central discretisation for scalar diffusion. The damping effect of the surfactant contamination on the near-surface (horizontal) velocities in the DNS is modelled using horizontal gradients of the surfactant concentration. An important parameter in this model, which corresponds to the level of contamination, is ReMa/We, where Re is the Reynolds number, Ma is the Marangoni number, and We is the Weber number. It was previously found that even small levels of contamination (small ReMa/We) lead to a significant drop in the interfacial gas transfer velocity KL. It is known that KL depends on both the Schmidt number Sc (the ratio of the kinematic viscosity to the gas diffusivity in water) and the surface divergence β, i.e. KL ∝ √(β/Sc). Previously it has been shown that this relation works well for surfaces with low to moderate contamination; however, it will break down for β close to zero. To study the validity of this dependence in the presence of surface contamination, simulations were carried out for ReMa/We = 0, 0.12, 0.6, 1.2, 6, 30 and Sc = 2, 4, 8, 16, 32. First, it will be shown that the scaling of KL with Sc remains valid also for larger ReMa/We.
This is an important result, indicating that, for various levels of contamination, the numerical results obtained at low Schmidt numbers are also valid for significantly higher and more realistic Sc. Subsequently, it will be shown that, with increasing ReMa/We, the dependency of KL on β begins to break down, as the increased damping of near-surface fluctuations results in an increased damping of β. Especially at large levels of contamination, this damping is so severe that KL is found to be significantly underestimated.
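The surface-divergence scaling above can be sketched numerically; the proportionality constant below is a placeholder, not a value from the study:

```python
import math

def gas_transfer_velocity(beta, sc, c=1.0):
    """KL ∝ sqrt(beta / Sc): surface-divergence scaling of the
    interfacial gas transfer velocity.

    beta : surface divergence (1/s)
    sc   : Schmidt number (kinematic viscosity / gas diffusivity)
    c    : proportionality constant (hypothetical value here)
    """
    return c * math.sqrt(beta / sc)

# Doubling Sc at fixed beta reduces KL by a factor of sqrt(2),
# which is why results at low Sc can be rescaled to higher,
# more realistic Schmidt numbers.
k1 = gas_transfer_velocity(beta=0.5, sc=2)
k2 = gas_transfer_velocity(beta=0.5, sc=4)
```

The breakdown reported in the abstract occurs precisely where this formula misbehaves: as contamination drives β toward zero, the predicted KL vanishes faster than the simulated one, so the model underestimates KL.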

Keywords: contamination, gas transfer, surfactants, turbulence

Procedia PDF Downloads 300
873 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that, over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effects, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules, but also historically patriarchal societies.
The progression of society goes hand in hand not only with its language, but with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
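One simple way to surface the kind of pattern described above is to count which descriptors co-occur with gendered pronouns in a corpus. The sketch below is a toy illustration, not the authors' method; the word lists and corpus are invented:

```python
import re
from collections import Counter

# Illustrative word lists, not from the paper
FEMALE = {"she", "her", "hers"}
MALE = {"he", "him", "his"}
ADJECTIVES = {"hysterical", "shrill", "bossy", "brilliant", "wise"}

def cooccurrence_counts(text, window=3):
    """Count adjectives appearing within `window` tokens of a
    gendered pronoun, a crude first pass at detecting biased
    descriptors in training corpora."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = {"female": Counter(), "male": Counter()}
    for i, tok in enumerate(tokens):
        if tok in ADJECTIVES:
            ctx = set(tokens[max(0, i - window): i + window + 1])
            if FEMALE & ctx:
                counts["female"][tok] += 1
            if MALE & ctx:
                counts["male"][tok] += 1
    return counts

corpus = "Critics dismissed her as shrill and bossy. They praised him as brilliant and wise."
c = cooccurrence_counts(corpus)
```

Real studies replace this windowed counting with embedding-based or classifier-based measures, but even this crude tally makes the asymmetry in descriptor usage visible.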

Keywords: computational analysis, gendered grammar, misogynistic language, neural networks

Procedia PDF Downloads 122
872 Synthesis, Characterization and Rheological Properties of Boron Oxide/Polymer Nanocomposites

Authors: Mehmet Doğan, Mahir Alkan, Yasemin Turhan, Zürriye Gündüz, Pinar Beyli, Serap Doğan

Abstract:

Advances and new discoveries in materials science have played an important role in technological development. Today, materials science has branched into subfields such as metals, nonmetals, chemicals, and polymers, and polymeric nanocomposites have found a wide field of application as one of the most important of these groups. Improved thermal stability is desired for many polymers used in different fields of industry, and one way to improve this property is to form nanocomposites of these polymers with different fillers. Boron compounds already have many areas of use, and their range of applications grows day by day; to further extend the variety of uses and industrial importance of boron compounds, it is necessary to synthesize nano-products from them and to find new application areas for these products. In this study, PMMA/boron oxide nanocomposites were synthesized using solution intercalation, polymerization, and melt methods, and PAA/boron oxide nanocomposites using the solution intercalation method. Furthermore, the rheological properties of the nanocomposites synthesized by the melt method were also studied. Nanocomposites were characterized by XRD, FTIR-ATR, DTA/TG, BET, SEM, and TEM. The effects of filler amount, solvent type, and mediating reagent on the thermal stability of the polymers were investigated. In addition, the rheological properties of PMMA/boron oxide nanocomposites synthesized by the melt method were investigated using a high-pressure capillary rheometer. XRD analysis showed that boron oxide was dispersed in the polymer matrix; FTIR-ATR showed interactions of boron oxide with both PAA and PMMA; and TEM showed that the boron oxide particles were spherical and dispersed at nanoscale dimensions in the polymer matrix. The thermal stability of the polymers increased with the addition of boron oxide to the polymer matrix, and the decomposition mechanism of PAA was changed.
From the rheological measurements, it was found that PMMA and the PMMA/boron oxide nanocomposites exhibited non-Newtonian, pseudoplastic, shear-thinning behavior under all experimental conditions.
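Shear-thinning behavior of this kind is commonly described by the power-law (Ostwald–de Waele) model, η(γ̇) = K·γ̇ⁿ⁻¹ with n &lt; 1. A minimal sketch, with illustrative parameter values not fitted to the paper's data:

```python
def apparent_viscosity(shear_rate, K=1.2e4, n=0.4):
    """Power-law (Ostwald-de Waele) model for a pseudoplastic fluid:
        eta(gamma_dot) = K * gamma_dot**(n - 1)
    K (consistency index) and n (flow behavior index) are
    hypothetical values; n < 1 gives shear thinning.
    """
    return K * shear_rate ** (n - 1)

# Shear thinning: apparent viscosity falls as the shear rate rises,
# as measured in a capillary rheometer across a range of shear rates.
eta_low = apparent_viscosity(10.0)     # low shear rate (1/s)
eta_high = apparent_viscosity(1000.0)  # high shear rate (1/s)
```

In practice K and n are obtained by fitting log(η) against log(γ̇) over the rheometer's shear-rate range; a filler such as boron oxide typically shifts both parameters.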

Keywords: boronoxide, polymer, nanocomposite, rheology, characterization

Procedia PDF Downloads 434
871 Spatio-Temporal Risk Analysis of Cancer to Assess Environmental Exposures in Coimbatore, India

Authors: Janani Selvaraj, M. Prashanthi Devi, P. B. Harathi

Abstract:

Epidemiologic studies conducted over several decades have provided evidence that long-term exposure to elevated ambient levels of particulate air pollution is associated with increased mortality. Air quality risk management is significant in developing countries, and this highlights the need to understand the role of ecologic covariates in the association between air pollution and mortality. Several new methods show promise in exploring the geographical distribution of disease and identifying high-risk areas using epidemiological maps; adding the temporal attribute would further give an in-depth picture of the disease burden with respect to forecasting. In recent years, new methods developed in reanalyses have been useful for exploring the spatial structure of the data and the impact of spatial autocorrelation on estimates of risk associated with exposure to air pollution. Based on this, the present study aims to explore the spatial and temporal distribution of lung cancer cases in the Coimbatore district of Tamil Nadu in relation to air pollution risk areas. A spatio-temporal moving average was computed using the CrimeStat software and visualized in ArcGIS 10.1 to document the spatio-temporal movement of the disease in the study region. The random walk analysis showed the progression of peak cancer incidence in the intersection of the Coimbatore North and South taluks, which includes major commercial and residential areas such as Gandhipuram, Peelamedu, and Ganapathy. Our study provides evidence that daily exposure to zones of high air pollutant concentration may increase the risk of lung cancer. The observations from the present study will be useful in delineating high-risk zones of environmental exposure that contribute to the increase of cancer among daily commuters.
Through our study, we suggest that spatially resolved exposure models over relevant time frames will delineate high-risk zones better than approaches relying solely on statistical theory about the impact of measurement error and on empirical findings.
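The spatio-temporal moving average can be illustrated with a toy sketch (this is not CrimeStat's implementation): order the cases by time and average the coordinates within a sliding temporal window, tracing how the centre of incidence drifts.

```python
def temporal_moving_average(events, window=3):
    """events: iterable of (t, x, y) case records.
    Returns the smoothed (x, y) path of the incidence centre,
    averaging each case's location with its temporal neighbours."""
    events = sorted(events)  # order cases by time t
    half = window // 2
    path = []
    for i in range(len(events)):
        lo = max(0, i - half)
        hi = min(len(events), i + half + 1)
        xs = [e[1] for e in events[lo:hi]]
        ys = [e[2] for e in events[lo:hi]]
        path.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return path

# Hypothetical case locations drifting north-east over time
cases = [(1, 0.0, 0.0), (2, 1.0, 0.5), (3, 2.0, 1.0), (4, 3.0, 1.5)]
path = temporal_moving_average(cases)
```

Plotting such a path over a base map (as done here in ArcGIS) shows whether the peak of incidence migrates toward particular commercial or residential zones over the study period.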

Keywords: air pollution, cancer, spatio-temporal analysis, India

Procedia PDF Downloads 515