Search results for: language learning model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23778

13158 English Loanwords in Nigerian Languages: Sociolinguistic Survey

Authors: Surajo Ladan

Abstract:

English has been present in Nigeria since the colonial period. Its advent has caused many linguistic changes in Nigerian languages, especially among the educated elites; to some extent, even ordinary people were not spared from this phenomenon. This scenario has generated a linguistic situation which culminated in the creation of Nigerian Pidgin, a conglomeration of English and other Nigerian languages. English has infiltrated the Nigerian languages to the point that a typical Nigerian can hardly talk without code-switching or using one English word or another. The existence of English loanwords in Nigerian languages has taken another dimension in this scientific and technological age. Most scientific and technological inventions are products of the English language and are virtually adopted into the languages with phonological, morphological, and sometimes semantic variations. This paper is of the view that there should be a re-think and agitation among Nigerians to protect their languages, which are invariably facing extinction, from the linguistic genocide of English.

Keywords: linguistic change, loanword, phenomenon, pidgin

Procedia PDF Downloads 844
13157 A Comparative Evaluation of Finite Difference Methods for the Extended Boussinesq Equations and Application to Tsunamis Modelling

Authors: Aurore Cauquis, Philippe Heinrich, Mario Ricchiuto, Audrey Gailler

Abstract:

In this talk, we look for an accurate time scheme to model the propagation of waves. Several numerical schemes have been developed to solve the extended weakly nonlinear, weakly dispersive Boussinesq equations. The temporal schemes used are two Lax-Wendroff schemes, second- or third-order accurate, two Runge-Kutta schemes of second and third order, and a simplified third-order accurate Lax-Wendroff scheme. Spatial derivatives are evaluated with fourth-order accuracy. The numerical model is applied to two one-dimensional benchmarks on a flat bottom. It is also applied to the simulation of the Algerian tsunami generated by an Mw = 6 earthquake on 18 March 2021. The tsunami propagation was highly dispersive, and the waves crossed the Mediterranean Sea. We study here the effects of the order of temporal discretization on the accuracy of the results and on the computation time.
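The Runge-Kutta family of temporal schemes compared above can be illustrated with a minimal sketch of the third-order strong-stability-preserving Runge-Kutta (SSP-RK3) step; the right-hand side `L`, the toy decay problem, and all names here are illustrative stand-ins, not the paper's Boussinesq discretization.

```python
# Minimal sketch of the third-order SSP Runge-Kutta time step often used for
# semi-discrete wave models du/dt = L(u). Names and the test problem are
# illustrative; the actual schemes discretize the extended Boussinesq equations.

def ssp_rk3_step(u, L, dt):
    """Advance u by one step of the Shu-Osher SSP-RK3 scheme."""
    u1 = [ui + dt * li for ui, li in zip(u, L(u))]
    u2 = [0.75 * ui + 0.25 * (u1i + dt * li)
          for ui, u1i, li in zip(u, u1, L(u1))]
    return [ui / 3.0 + 2.0 / 3.0 * (u2i + dt * li)
            for ui, u2i, li in zip(u, u2, L(u2))]

# Toy right-hand side: linear decay du/dt = -u (exact solution u0 * exp(-t))
decay = lambda u: [-ui for ui in u]
u = [1.0]
for _ in range(100):
    u = ssp_rk3_step(u, decay, 0.01)
# After t = 1.0, u[0] is close to exp(-1) ≈ 0.3679, to third-order accuracy
```

Comparing the global error of such a step against a second-order scheme at the same step size is the kind of accuracy-versus-cost study the abstract describes.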

Keywords: numerical analysis, tsunami propagation, water wave, Boussinesq equations

Procedia PDF Downloads 231
13156 Optimization of Leaching Properties of a Low-Grade Copper Ore Using Central Composite Design (CCD)

Authors: Lawrence Koech, Hilary Rutto, Olga Mothibedi

Abstract:

Worldwide demand for copper has led to an intensive search for methods of extraction and recovery of copper from different sources. This study investigates the leaching properties of a low-grade copper ore by optimizing the leaching variables using response surface methodology. The effects of key parameters, i.e., temperature, solid-to-liquid ratio, stirring speed, and pH, on the leaching rate constant were investigated using a pH-stat apparatus. A Central Composite Design (CCD) of experiments was used to develop a quadratic model that correlates the leaching variables with the rate constant. The results indicated that the model is in good agreement with the experimental data, with a correlation coefficient (R2) of 0.93. The temperature and the solid-to-liquid ratio were found to have the most substantial influence on the leaching rate constant. The optimum operating conditions for copper leaching from the ore were identified as a temperature of 65 °C, a solid-to-liquid ratio of 1.625, and a stirring speed of 325 rpm, which yielded an average leaching efficiency of 93.16%.
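As a hedged illustration of the CCD workflow described above, the sketch below fits the standard two-factor second-order response-surface model y = b0 + b1·x1 + b2·x2 + b11·x1² + b22·x2² + b12·x1·x2 to synthetic design points; the coded levels and coefficients are hypothetical, not the paper's leaching data.

```python
# Sketch of fitting the second-order model used with a central composite
# design. All numbers below are synthetic illustrations.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def terms(x1, x2):
    """Regressors of the full quadratic model in two coded factors."""
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

# Coded CCD levels for two factors: factorial, axial (alpha ≈ 1.414), centre
design = [(-1, -1), (1, -1), (-1, 1), (1, 1),
          (-1.414, 0), (1.414, 0), (0, -1.414), (0, 1.414), (0, 0), (0, 0)]
true_b = [50.0, 8.0, -3.0, -2.0, -1.5, 0.5]   # hypothetical coefficients
y = [sum(b * t for b, t in zip(true_b, terms(*p))) for p in design]

# Least squares via the normal equations X^T X b = X^T y
X = [terms(*p) for p in design]
XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(6)]
       for i in range(6)]
Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(6)]
b_hat = solve(XtX, Xty)
# b_hat recovers true_b exactly because the synthetic data are noise-free
```

With real measurements, the same fit yields the R² and optimum operating point reported in the abstract.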

Keywords: copper, leaching, CCD, rate constant

Procedia PDF Downloads 236
13155 Determination of Friction and Damping Coefficients of Folded Cover Mechanism Deployed by Torsion Springs

Authors: I. Yilmaz, O. Taga, F. Kosar, O. Keles

Abstract:

In this study, the friction and damping coefficients of a folded cover mechanism were obtained from experimental studies and data. Friction and damping coefficients are the most important inputs for a mechanism analysis: they are the two factors that change the deployment time of mechanisms and their dynamic behavior. Though recommended friction coefficient values exist in the literature, damping differs from one mechanical system to another, so the damping coefficient should be obtained from mechanism test outputs. The folded cover mechanism in this study uses torsion springs to deploy covers that are initially in the closed, folded position. Torsion springs provide the folded covers with the desired deployment time under variable environmental conditions. Verifying every design revision with system tests would be very costly, so some decisions are made using numerical methods. In this study, two folded covers are required to deploy simultaneously. Scotch-yoke and crank-rod mechanisms were combined to deploy the folded covers simultaneously. The mechanism was unlocked with a pyrotechnic bolt on the scotch-yoke disc. When the pyrotechnic bolt fired, the torsion springs provided rotational movement for the mechanism. A high-speed camera recorded the dynamic behavior of the system during deployment. The mechanism was modeled as a rigid body in Adams MBD (multibody dynamics), and the torque values provided by the torsion springs were used as input. A reasonable range of friction and damping coefficients was defined in Adams DOE (design of experiments), and a large number of analyses were performed until the deployment time of the folded covers agreed with the test data observed in the high-speed camera recordings; thus, the deployment time and dynamic behavior of the mechanism were obtained. The same mechanism was tested with different torsion springs and torque values, and the outputs were compared with the numerical models.
According to this comparison, the friction and damping coefficients obtained in this study can be used safely when studying folded objects required to deploy simultaneously. In addition to the rigid-body model generated with Adams, a finite element model of the folded mechanism was generated with Abaqus, and the outputs of the rigid-body model and the finite element model were compared. Finally, reasonable explanations were suggested for the differing outputs of these solution methods.

Keywords: damping, friction, pyrotechnic, scotch-yoke

Procedia PDF Downloads 317
13154 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus

Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert

Abstract:

Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase through the planning and execution phases to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model, and creating one is very challenging: it requires special equipment for data capture and effort to convert these data into a BIM model. The main difficulties in such projects are defining the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic, so integrating the existing terrain that surrounds buildings into the digital model is essential for running simulations such as flood simulation, energy simulation, etc. Making a replica of the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the type of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology for producing a 3D BIM model of the site and the existing buildings, based on the case study of the École Spéciale des Travaux Publics (ESTP Paris) school of engineering campus. The property is located on a hilly site of 5 hectares and comprises more than 20 buildings with a total area of 32,000 square meters and heights between 50 and 68 meters.
In this work, the campus precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points are computed according to the RGF93 (Réseau Géodésique Français) – Lambert 93 French system with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, and (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the input data are identified, the digital model of each building is created, and the DTM is modeled as well. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC), and ReViT (RVT) will be generated, so checking the interoperability between BIM models is very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.

Keywords: building information modeling, digital terrain model, existing buildings, interoperability

Procedia PDF Downloads 105
13153 Gravitational Frequency Shifts for Photons and Particles

Authors: Jing-Gang Xie

Abstract:

This research considers the integration of Quantum Field Theory and General Relativity. Although both are successful models for explaining the behavior of particles, they are incompatible, since they work at different masses and energy scales, as evidenced by the description of black holes and the formation of the universe. This follows previous efforts to merge the two theories, including the likes of String Theory, Quantum Gravity models, and others. In a bid to propose an actionable experiment, the paper's approach starts from derivations of the existing theories. It then tests these derivations by applying the same initial assumptions, coupled with several deviations. The resulting equations give results similar to those of the classical Newtonian model, quantum mechanics, and general relativity as long as conditions are normal. However, the outcomes differ when conditions are extreme; specifically, there are no breakdowns even below the Schwarzschild radius or at the Planck length. This suggests the possibility of integrating the two theories.
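For context, the standard general-relativistic gravitational frequency shift that such derivations must reproduce under normal conditions can be computed directly from the Schwarzschild metric; the solar values below are approximate illustrations, not results from the paper.

```python
import math

# Standard GR gravitational redshift: a photon emitted at radius r from a
# mass M is observed at infinity with frequency
#   f_obs = f_emit * sqrt(1 - r_s / r),  r_s = 2 G M / c^2 (Schwarzschild radius).
# Constants and solar values are approximate.

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m / s

def schwarzschild_radius(mass):
    return 2.0 * G * mass / c**2

def observed_frequency(f_emit, mass, r):
    rs = schwarzschild_radius(mass)
    if r <= rs:
        raise ValueError("emitter inside the Schwarzschild radius")
    return f_emit * math.sqrt(1.0 - rs / r)

# Example: photon emitted at the solar surface, observed far away
M_sun, R_sun = 1.989e30, 6.96e8
shift = 1.0 - observed_frequency(1.0, M_sun, R_sun)
# Fractional redshift of roughly 2e-6, the textbook solar value
```

Note the classical formula itself breaks down as r approaches r_s, which is exactly the regime where the abstract claims its equations remain finite.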

Keywords: general relativity theory, particles, photons, quantum gravity model, gravitational frequency shift

Procedia PDF Downloads 354
13152 Biaxial Buckling of Single Layer Graphene Sheet Based on Nonlocal Plate Model and Molecular Dynamics Simulation

Authors: R. Pilafkan, M. Kaffash Irzarahimi, S. F. Asbaghian Namin

Abstract:

The biaxial buckling behavior of single-layered graphene sheets (SLGSs) is studied in the present work. To consider size effects in the analysis, Eringen's nonlocal elasticity equations are incorporated into classical plate theory (CLPT). A Generalized Differential Quadrature Method (GDQM) approach is utilized, and numerical solutions for the critical buckling loads are obtained. Then, molecular dynamics (MD) simulations are performed for a series of zigzag SLGSs with different side lengths and various boundary conditions, the results of which are matched with those obtained by the nonlocal plate model to determine numerically the appropriate value of the nonlocal parameter for each type of boundary condition.

Keywords: biaxial buckling, single-layered graphene sheets, nonlocal elasticity, molecular dynamics simulation, classical plate theory

Procedia PDF Downloads 272
13151 Influencing Factors to Mandatory versus Non-Mandatory E-Government Services Adoption in India: An Empirical Study

Authors: Rajiv Kumar, Amit Sachan, Arindam Mukherjee

Abstract:

Government agencies around the world, including in India, are incorporating digital technologies and processes into their day-to-day operations to become more efficient. Despite low internet penetration in India (around 34.8% of the total population), the Government of India has made some public services mandatory to access online (e.g., passport, tax filing). This compels citizens to access mandatory public services online. However, due to the digital divide, not all citizens have equal access to the internet. In light of this, it is an interesting topic to explore how citizens are able to access mandatory online public services. It is important to understand how citizens are adopting these mandatory e-government services and how their adoption behavior differs from, or resembles, that of non-mandatory e-government services. The purpose of this research is to investigate the factors that influence the adoption of mandatory and non-mandatory e-government services in India. A quantitative technique is employed. A conceptual model has been proposed by integrating the influencing factors for adopting e-government services from previous studies; it highlights a comprehensive set of potential factors and has been validated with the local context of Indian society in view. Online and paper-based surveys were administered, and a total of 463 valid responses were received and analyzed. The research reveals that the factors influencing adoption are not the same for mandatory and non-mandatory e-government services: some factors influence the adoption of both, while others are relevant to only one of the two.
The research findings may help the government or concerned agencies to implement e-government services successfully.

Keywords: adoption, e-government, India, mandatory, non-mandatory

Procedia PDF Downloads 310
13150 Corporate Social Responsibility and Dividend Policy

Authors: Mohammed Benlemlih

Abstract:

Using a sample of 22,839 US firm-year observations over the 1991-2012 period, we find that high-CSR firms pay more dividends than low-CSR firms. The analysis of individual components of CSR provides strong support for this main finding: five of the six individual dimensions are also associated with high dividend payout. When analyzing the stability of dividend payout, our results show that socially irresponsible firms adjust dividends more rapidly than socially responsible firms do: dividend payout is more stable in high-CSR firms. Additional results suggest that firms involved in two controversial activities (the military and alcohol) are associated with low dividend payouts. These findings are robust to alternative assumptions and model specifications, alternative measures of dividends, additional controls, and several approaches to addressing endogeneity. Overall, our results are consistent with the expectation that high-CSR firms may use dividend policy to manage the agency problems related to overinvestment in CSR.

Keywords: corporate social responsibility, dividend policy, Lintner model, agency theory, signaling theory, dividend stability

Procedia PDF Downloads 256
13149 Extending the AOP Joinpoint Model for Memory and Type Safety

Authors: Amjad Nusayr

Abstract:

Software security is a general term used for any type of software architecture or model in which security aspects are incorporated into the architecture without being part of the main logic of the underlying program. Software security can be achieved using a combination of approaches, including but not limited to secure software design, third-party component validation, and secure coding practices. Memory safety is one feature of software security in which we ensure that any object in memory has a valid pointer or a reference with a valid type. Aspect-Oriented Programming (AOP) is a paradigm concerned with capturing cross-cutting concerns in code development; it is generally used for common cross-cutting concerns like logging and database transaction management. In this paper, we introduce the concepts that enable AOP to be used for the purpose of memory and type safety. We also present ideas for extending AOP in software security practices.
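As an illustrative sketch (not the paper's joinpoint model), the AOP idea of weaving type-safety advice around call joinpoints can be emulated in Python with a decorator that checks annotated argument types before each call:

```python
# Toy "aspect" for type safety: the decorator plays the role of advice woven
# at every call joinpoint of the decorated function. All names are illustrative.

import functools
import inspect

def type_safe(func):
    """Advice applied at the call joinpoint: verify annotated argument types."""
    sig = inspect.signature(func)
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = func.__annotations__.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}, "
                                f"got {type(value).__name__}")
        return func(*args, **kwargs)
    return wrapper

@type_safe
def store(buffer: list, item: int):
    buffer.append(item)
    return len(buffer)

n = store([], 3)           # passes the woven check
try:
    store([], "oops")      # advice rejects the badly typed call
    rejected = False
except TypeError:
    rejected = True
```

In a full AOP system the same check would be expressed once as a pointcut over many joinpoints (calls, field accesses, dereferences) rather than attached function by function.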

Keywords: aspect oriented programming, programming languages, software security, memory and type safety

Procedia PDF Downloads 122
13148 Integrated Design in Additive Manufacturing Based on Design for Manufacturing

Authors: E. Asadollahi-Yazdi, J. Gardan, P. Lafon

Abstract:

Nowadays, manufacturers face the production of different versions of products due to quality, cost, and time constraints. On the other hand, Additive Manufacturing (AM), as a production method based on a CAD model, disrupts the design and manufacturing cycle with new parameters. To address these issues, researchers have applied the Design for Manufacturing (DFM) approach to AM, but until now there has been no integrated approach for the design and manufacture of a product through AM. This paper therefore aims to provide a general methodology for managing the different production issues and to support interoperability between the AM process and different Product Lifecycle Management tools. The problem is that the models of systems engineering used for managing complex systems cannot capture the product's evolution and its impact on the product life cycle. It therefore seems necessary to provide a general methodology for managing the product diversity created by using AM. This methodology must consider manufacture and assembly during product design, as early as possible in the design stage. The latest approach to DFM, as a methodology to analyze the system comprehensively, integrates manufacturing constraints upstream in the numerical model. DFM for AM is thus used to import the characteristics of AM into the design and manufacturing process of a hybrid product, in order to manage the criteria coming from AM. The research also presents an integrated design method that takes into account knowledge of layer manufacturing technologies. For this purpose, an interface model based on the skin and skeleton concepts is provided: usage and manufacturing skins are used to represent the functional surfaces of the product, while the material flow and the links between the skins are represented by usage and manufacturing skeletons.
This integrated approach is therefore a helpful methodology for designers and manufacturers in decisions such as material and process selection, as well as the evaluation of product manufacturability.

Keywords: additive manufacturing, 3D printing, design for manufacturing, integrated design, interoperability

Procedia PDF Downloads 310
13147 The Influence of Family of Origin on Children: A Comprehensive Model and Implications for Positive Psychology and Psychotherapy

Authors: Meichen He, Xuan Yang

Abstract:

Background: In the field of psychotherapy, the role of the family of origin is of utmost importance. Over the past few decades, both individual-oriented and family-oriented approaches to child therapy have shown moderate success in reducing children's psychological and behavioral issues. Objective: However, in exploring how the family of origin influences individuals, it has been noted that there is a lack of comprehensive measurement indicators and an absence of an exact model to assess the impact of the family of origin on individual development. Therefore, this study aims to develop a model, based on a literature review, of the influence of the family of origin on children. Specifically, it examines the effects of factors such as education level, economic status, maternal age, family integration, family violence, marital conflict, and parental substance and alcohol abuse on children's self-confidence and life satisfaction. Through this research, we aim to further investigate the impact of the family of origin on children and provide directions for future research in positive psychology and psychotherapy. Methods: This study employs a literature review methodology to gather and analyze relevant research articles on the influence of the family of origin on children. Subsequently, we conduct quantitative analyses to establish a comprehensive model explaining how family-of-origin factors affect children's psychological and behavioral outcomes. Findings: The research revealed that family-of-origin factors, including education level, economic status, maternal age, family integration, family violence, marital conflict, and parental drug and alcohol consumption, have an impact on children's self-confidence and life satisfaction. These factors can affect children's psychological well-being and happiness through various pathways.
Implications: The results of this study will contribute to a better understanding of the influence of the family of origin on children and provide valuable directions for future research in positive psychology and psychotherapy. This research will enhance awareness of children's psychological well-being and lay the foundation for improving psychotherapeutic methods.

Keywords: family of origin, positive psychology, developmental psychology, family education, social psychology, educational psychology

Procedia PDF Downloads 143
13146 How Can Information Sharing Improve Organizational Performance?

Authors: Syed Abdul Rehman Khan

Abstract:

In today’s world, information sharing plays a vital role in the successful operation of supply chains and boosts the profitability of organizations (end-to-end supply chains). Much research has been done on the role of information sharing in the supply chain. In this article, we investigate how information sharing can boost the profitability and productivity of an organization; for this purpose, we developed a conceptual model and tested it with data collected from companies. We sent a questionnaire to 369 companies; completed forms were received from 172 firms, a response rate of almost 47%. For the data analysis, we used regression in SPSS. In the findings, all our hypotheses were supported significantly: through information sharing between suppliers and manufacturers, the quality of material and the timeliness of delivery increase, collaboration and trust become stronger, and all these factors lead, directly and indirectly, to the company’s profitability. Unfortunately, however, companies cannot reap all the benefits of information sharing because of the fear of compromised confidentiality or leakage of information.
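The regression step can be illustrated with a minimal ordinary-least-squares sketch; the study itself used SPSS, and the scores below are synthetic stand-ins, not the survey data.

```python
# Minimal sketch of simple regression: ordinary least squares of a
# profitability score on an information-sharing score. Data are synthetic.

def ols(x, y):
    """Return (intercept, slope) minimizing the sum of squared residuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

sharing = [1, 2, 3, 4, 5]               # hypothetical information-sharing scores
profit  = [2.1, 3.9, 6.2, 7.8, 10.0]    # hypothetical profitability scores
b0, b1 = ols(sharing, profit)
# b1 ≈ 1.97: each unit of information sharing adds about two profit points
```

The actual study would add further predictors (collaboration, trust, timely delivery) and significance tests, which SPSS reports alongside the coefficients.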

Keywords: collaboration, information sharing, risk factor, timely delivery

Procedia PDF Downloads 407
13145 Nurturing Scientific Minds: Enhancing Scientific Thinking in Children (Ages 5-9) through Experiential Learning in Kids Science Labs (STEM)

Authors: Aliya K. Salahova

Abstract:

Scientific thinking, characterized by purposeful knowledge-seeking and the harmonization of theory and facts, holds a crucial role in preparing young minds for an increasingly complex and technologically advanced world. This abstract presents a research study aimed at fostering scientific thinking in early childhood, focusing on children aged 5 to 9 years, through experiential learning in Kids Science Labs (STEM). The study utilized a longitudinal exploration design, spanning 240 weeks from September 2018 to April 2023, to evaluate the effectiveness of the Kids Science Labs program in developing scientific thinking skills. Participants in the research comprised 72 children drawn from local schools and community organizations. Through a formative psychology-pedagogical experiment, the experimental group engaged in weekly STEM activities carefully designed to stimulate scientific thinking, while the control group participated in daily art classes for comparison. To assess the scientific thinking abilities of the participants, a registration table with evaluation criteria was developed. This table included indicators such as depth of questioning, resource utilization in research, logical reasoning in hypotheses, procedural accuracy in experiments, and reflection on research processes. The data analysis revealed dynamic fluctuations in the number of children at different levels of scientific thinking proficiency. While the development was not uniform across all participants, a main leading factor emerged, indicating that the Kids Science Labs program and formative experiment exerted a positive impact on enhancing scientific thinking skills in children within this age range. The study's findings support the hypothesis that systematic implementation of STEM activities effectively promotes and nurtures scientific thinking in children aged 5-9 years. 
Enriching education with a specially planned STEM program, tailoring scientific activities to children's psychological development, and implementing well-planned diagnostic and corrective measures emerged as essential pedagogical conditions for enhancing scientific thinking abilities in this age group. The results highlight the significant and positive impact of the systematic-activity approach in developing scientific thinking, leading to notable progress and growth in children's scientific thinking abilities over time. These findings have promising implications for educators and researchers, emphasizing the importance of incorporating STEM activities into educational curricula to foster scientific thinking from an early age. This study contributes valuable insights to the field of science education and underscores the potential of STEM-based interventions in shaping the future scientific minds of young children.

Keywords: scientific thinking, education, STEM, intervention, psychology, pedagogy, collaborative learning, longitudinal study

Procedia PDF Downloads 57
13144 The Impact of the Use of Some Multiple Intelligence-Based Teaching Strategies on Developing Moral Intelligence and Inferential Jurisprudential Thinking among Secondary School Female Students in Saudi Arabia

Authors: Sameerah A. Al-Hariri Al-Zahrani

Abstract:

The current study aims to determine the impact of the use of some multiple intelligence-based teaching strategies on developing moral intelligence and inferential jurisprudential thinking among secondary school female students. The study endeavors to answer the following main question: What is the impact of the use of some multiple intelligence-based teaching strategies on developing inferential jurisprudential thinking and moral intelligence among first-year secondary school female students? Within the frame of this main question, the study seeks to answer the following sub-questions: (i) What are the inferential jurisprudential thinking skills among first-year secondary school female students? (ii) What are the components of moral intelligence among first-year secondary school female students? (iii) What is the impact of the use of some multiple intelligence-based teaching strategies (such as the strategies of analyzing values, modeling, Socratic discussion, collaborative learning, peer collaboration, collective stories, building emotional moments, role play, and one-minute observation) on moral intelligence among first-year secondary school female students? (iv) What is the impact of the use of these same strategies on developing the capacity for inferential jurisprudential thinking about juristic rules among first-year secondary school female students? The study used the descriptive-analytical methodology in surveying, analyzing, and reviewing the literature of previous studies in order to benefit from them in building the tools of the study and the materials of the experimental treatment.
The study also used the experimental method to study the impact of the independent variable (multiple intelligence strategies) on the two dependent variables (moral intelligence and inferential jurisprudential thinking) in first-year secondary school female students' learning. The sample consists of 70 female students divided into two groups: an experimental group of 35 students taught through multiple intelligence strategies, and a control group of 35 students taught conventionally. The two instruments of the study (an inferential jurisprudential thinking test and a moral intelligence scale) were administered to both groups as a pre-test. The researcher taught the experimental group and administered the two instruments. After the eight-week experiment, the study showed the following results: (i) statistically significant differences (0.05) between the mean of the control group and that of the experimental group in the inferential jurisprudential thinking test (recognition of the evidence for a jurisprudential rule, recognition of the motive for a jurisprudential rule, jurisprudential inference, analogical jurisprudence), in favor of the experimental group; (ii) statistically significant differences (0.05) between the mean of the control group and that of the experimental group in the components of the moral intelligence scale (sympathy, conscience, moral wisdom, tolerance, justice, respect), in favor of the experimental group. The study has thus demonstrated the impact of the use of some multiple intelligence-based teaching strategies on developing moral intelligence and inferential jurisprudential thinking.

Keywords: moral intelligence, teaching, inferential jurisprudential thinking, secondary school

Procedia PDF Downloads 158
13143 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils

Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha

Abstract:

Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have focused on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models for each particular soil type for better SWCC estimation. Better estimation of the SWCC is expected via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted to evaluate the reliability of the SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters in four forms of the SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey predictions are also compatible with the samples evaluated in this study, which range from low to high soil water content.
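A hedged sketch of the Brooks and Corey SWCC form that the study found most consistent for sand: effective saturation Se = 1 for suctions at or below the air-entry value ψb, and Se = (ψb/ψ)^λ above it. The data points and the coarse grid-search fit below are synthetic illustrations, not the study's laboratory measurements or optimization method.

```python
# Brooks-Corey SWCC:
#   Se = 1                     for suction psi <= psi_b (air-entry value)
#   Se = (psi_b / psi)**lam    for psi >  psi_b
# The "laboratory" points are synthetic, generated from psi_b = 2.0, lam = 0.8;
# a real study would fit measured data with a proper optimizer.

def brooks_corey(psi, psi_b, lam):
    return 1.0 if psi <= psi_b else (psi_b / psi) ** lam

data = [(1.0, 1.0), (2.0, 1.0), (4.0, 0.574349),
        (8.0, 0.329877), (16.0, 0.189465)]   # (suction, effective saturation)

# Coarse grid search for the best-fit (psi_b, lam) in a least-squares sense
best, best_err = None, float("inf")
for pb in [1.0 + 0.1 * i for i in range(31)]:        # psi_b in [1.0, 4.0]
    for lam in [0.3 + 0.05 * j for j in range(21)]:  # lam in [0.3, 1.3]
        err = sum((brooks_corey(p, pb, lam) - se) ** 2 for p, se in data)
        if err < best_err:
            best, best_err = (pb, lam), err
# best lands near the generating parameters (2.0, 0.8)
```

Goodness-of-fit statistics computed on such residuals are what allow the different SWCC models to be ranked for a given soil class.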

Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering

Procedia PDF Downloads 331
13142 Reactive Transport Modeling in Carbonate Rocks: A Single Pore Model

Authors: Priyanka Agrawal, Janou Koskamp, Amir Raoof, Mariette Wolthers

Abstract:

Calcite is the main mineral found in carbonate rocks, which form significant hydrocarbon reservoirs and subsurface repositories for CO2 sequestration. The injected CO2 mixes with the reservoir fluid and disturbs the geochemical equilibrium, triggering calcite dissolution. Different combinations of fluid chemistry and injection rate may therefore result in different evolutions of porosity, permeability and dissolution patterns. To model the changes in porosity and permeability, the Kozeny-Carman equation K ∝ φⁿ is used, where K is permeability and φ is porosity. The value of n is mostly based on experimental data or pore network models. In pore network models, this derivation depends on the accuracy of the relations used for conductivity and pore volume change. In fact, at a single pore scale, this relationship is the result of the pore-shape development due to dissolution. We have prepared a new reactive transport model for a single pore which simulates the complex chemical reaction of carbonic-acid-induced calcite dissolution and the subsequent pore-geometry evolution at a single pore scale. We use the COMSOL Multiphysics 5.3 package for the simulation. COMSOL utilizes the arbitrary Lagrangian-Eulerian (ALE) method for the free-moving domain boundary. We examined the effect of flow rate on the evolution of single-pore shape profiles due to calcite dissolution. We used three flow rates to cover the diffusion-dominated and advection-dominated transport regimes. The fluid in diffusion-dominated flow (Pe numbers of 0.037 and 0.37) becomes less reactive along the pore length and thus produces non-uniform pore shapes. However, for the advection-dominated flow (Pe number of 3.75), the fast velocity of the fluid keeps it relatively more reactive towards the end of the pore length, thus yielding a uniform pore shape. Different pore shapes, in terms of inlet opening versus overall pore opening, will have an impact on the relation between changing volumes and conductivity.
We have related the pore shape to the Pe number, which controls the transport regime. For every Pe number, we have derived the relation between conductivity and porosity. These relations will be used in the pore network model to obtain the porosity and permeability variation.
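The Kozeny-Carman update mentioned above is a one-line power law; a minimal sketch, with an illustrative reference permeability and an assumed exponent n = 3 (the paper derives regime-dependent relations instead):

```python
def kozeny_carman(k0, phi0, phi, n=3.0):
    """Update permeability from a porosity change via K ∝ φ^n.

    k0, phi0 : reference permeability and porosity
    n        : empirical exponent (regime-dependent, per the abstract)
    """
    return k0 * (phi / phi0) ** n

# Dissolution raises porosity from 0.10 to 0.20: with n = 3 the
# permeability grows by a factor of (0.20/0.10)**3 = 8.
k_new = kozeny_carman(k0=1.0e-12, phi0=0.10, phi=0.20)
```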

Keywords: single pore, reactive transport, calcite system, moving boundary

Procedia PDF Downloads 366
13141 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to perform question-and-answer tasks that parse real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains; wide-context cues remain elusive in parsing words and sentences; and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion.
In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seems psychologically appropriate for reasoning systems, it may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
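An attractor network of the kind described, in which stored patterns are the stable equilibrium points of a recurrent dynamics rather than explicitly stored items, can be sketched with a minimal Hopfield-style network; this is a standard textbook illustration, not the specific models the abstract surveys:

```python
def hebbian_weights(patterns):
    """Symmetric weights from Hebbian learning; patterns are +/-1 vectors."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=5):
    """Relax a noisy state toward the nearest stored attractor."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
w = hebbian_weights([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]              # corrupt one element
restored = recall(w, noisy)       # the dynamics pull the state back to the pattern
```

The "memory" here is nothing but the weight matrix: there is no addressable stored element, only an equilibrium of the dynamics, which is the contrast the abstract draws with memory-network arrays.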

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 264
13140 Simulation and Modeling of High Voltage Pulse Transformer

Authors: Zahra Emami, H. Reza Mesgarzade, A. Morad Ghorbami, S. Reza Motahari

Abstract:

This paper presents a method for calculating the parasitic elements, consisting of the leakage inductance and parasitic capacitance, of a high voltage pulse transformer. The parasitic elements of pulse transformers significantly influence the resulting pulse shape of a power modulator system. To account for these effects on the pulse shape before constructing the transformer, an electrical model is needed. The procedures for computing these elements are based on finite element analysis. The finite element model of the pulse transformer is created using the software "Ansys Maxwell 3D". Finally, the transformer parasitic elements are calculated and compared with the values obtained from the actual test; the pulse modulator is then simulated, and the results are compared with the actual test of the pulse modulator. The results obtained are very similar to the test values.
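Although the abstract's parasitic elements come from a finite-element model, their effect on the pulse shape can be illustrated with the standard ringing-frequency estimate for a leakage inductance and parasitic capacitance pair; the component values below are illustrative, not the paper's:

```python
import math

def ringing_frequency(l_leak, c_par):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of the parasitic L-C pair,
    which sets the oscillation superimposed on the output pulse edges."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_leak * c_par))

# e.g. 1 uH leakage inductance with 1 nF parasitic capacitance
f = ringing_frequency(1.0e-6, 1.0e-9)   # on the order of a few MHz
```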

Keywords: pulse transformer, simulation, modeling, Maxwell 3D, modulator

Procedia PDF Downloads 451
13139 Increasing Holism: Qualitative, Cross-Dimensional Study of Contemporary Innovation Processes

Authors: Sampo Tukiainen, Jukka Mattila, Niina Erkama, Erkki Ormala

Abstract:

During the past decade, calls for more holistic and integrative organizational innovation research have been increasingly voiced. On the one hand, from the theoretical perspective, the reason for this has been the tendency in contemporary innovation studies to focus on disciplinary subfields, often leading to challenges in integrating theories in meaningful ways. For example, we find that during the past three decades innovation research has evolved into an academic field consisting of several independent research streams, such as studies on organizational learning, project management, and top management teams, to name but a few. Innovation research has also proliferated along different dimensions of innovation, such as the sources, drivers, forms, and nature of innovation. On the other hand, from the practical perspective, the rationale has been the need to develop an understanding of how to solve complex, interdisciplinary issues and problems in contemporary and future societies and organizations. Therefore, for advancing theorizing, as well as the practical applicability of organizational innovation research, we acknowledge the need for more integrative and holistic perspectives and approaches. We contribute to addressing this challenge by developing a ‘box transcendent’ perspective to examine interlinkages in and across four key dimensions of organizational innovation processes, which traditionally have been studied in separate research streams. Building on an in-depth, qualitative analysis of 123 interviews of CTOs (or equivalent) and CEOs in top innovative Finnish companies, as well as three in-depth case studies, both as part of an EU-level interview study of more than 700 companies, we specify interlinkages in and between i) strategic management, ii) innovation management, iii) implementation and organization, and iv) commercialization in innovation processes. We contribute to the existing innovation research in multiple ways.
Firstly, we develop a cross-dimensional, ‘box transcendent’ conceptual model at the level of the organizational innovation process. Secondly, this modeling enables us to extend existing theorizing by distinguishing specific cross-dimensional innovation ‘profiles’ in two different company categories: large multinational corporations and SMEs. Finally, from the more practical perspective, we consider the implications of such innovation ‘profiles’ for societal and institutional policy-making.

Keywords: holistic research, innovation management, innovation studies, organizational innovation

Procedia PDF Downloads 319
13138 A Multi-Cluster Enterprise Framework for Evolution of Knowledge System among Enterprises, Governments and Research Institutions

Authors: Sohail Ahmed, Ke Xing

Abstract:

This research theoretically explores the evolution mechanism of the enterprise technological innovation capability system (ETICS) from the perspective of complex adaptive systems (CAS). Starting from CAS theory, this study proposes an analytical framework for ETICS, its concepts and theory, by integrating CAS methodology into the management of the technological innovation capability of enterprises, and discusses how to use the principles of complexity to analyze the composition, evolution and realization of technological innovation capabilities in a complex dynamic environment. This paper introduces the concept and interaction of multi-agents and the theoretical background of CAS, and summarizes the sources of technological innovation, the elements of each subject, and the main clusters of adaptive interactions and innovation activities. The concept of multi-agents is applied through the linkages of enterprises, research institutions and government agencies with the leading enterprises in industrial settings. The study was exploratory, based on CAS theory. A theoretical model is built by considering the technological and innovation literature, from foundational work to state-of-the-art projects of technological enterprises. On this basis, the theoretical model is developed to measure the evolution mechanism of the enterprise technological innovation capability system. This paper concludes that the evolution of technological systems is driven mainly by an enterprise's research and development personnel, and that investments in technological processes and innovation resources are responsible for the evolution of enterprise technological innovation performance. The research specifically enriches the application of technological innovation in institutional networks related to enterprises.

Keywords: complex adaptive system, echo model, enterprise knowledge system, research institutions, multi-agents

Procedia PDF Downloads 60
13137 The Role of Urban Development Patterns for Mitigating Extreme Urban Heat: The Case Study of Doha, Qatar

Authors: Yasuyo Makido, Vivek Shandas, David J. Sailor, M. Salim Ferwati

Abstract:

Mitigating extreme urban heat is challenging in a desert climate such as that of Doha, Qatar, since outdoor daytime temperatures are often too high for the human body to tolerate. Recent studies demonstrate that cities in arid and semiarid areas can exhibit ‘urban cool islands’ - urban areas that are cooler than the surrounding desert. However, how temperatures vary with the time of day, and which factors lead to temperature change, remain open questions. To address these questions, we examined the spatial and temporal variation of air temperature in Doha, Qatar by conducting multiple vehicle-based local temperature observations. We also employed three statistical approaches to model surface temperatures using relevant predictors: (1) Ordinary Least Squares, (2) Regression Tree Analysis and (3) Random Forest, for three time periods. Although the most important determinant factors varied by day and time, distance to the coast was the most significant determinant at midday. A 70%/30% holdout method was used to create a testing dataset to validate the results through Pearson’s correlation coefficient. The Pearson’s analysis suggests that the Random Forest model predicts the surface temperatures more accurately than the other methods. We conclude with recommendations about the types of development patterns that show the greatest potential for reducing extreme heat in arid climates.
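The 70%/30% holdout validation described above scores each model with Pearson's correlation coefficient, which can be sketched directly (the temperature values below are invented illustrations):

```python
def pearson_r(observed, predicted):
    """Pearson correlation between observed and model-predicted temperatures."""
    n = len(observed)
    mean_o = sum(observed) / n
    mean_p = sum(predicted) / n
    cov = sum((o - mean_o) * (p - mean_p) for o, p in zip(observed, predicted))
    sd_o = sum((o - mean_o) ** 2 for o in observed) ** 0.5
    sd_p = sum((p - mean_p) ** 2 for p in predicted) ** 0.5
    return cov / (sd_o * sd_p)

# r close to 1 means the model ranks hot and cool sites correctly
r = pearson_r([30.1, 34.5, 39.2], [30.5, 34.0, 38.8])
```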

Keywords: desert cities, tree-structure regression model, urban cool Island, vehicle temperature traverse

Procedia PDF Downloads 386
13136 Cellular Automata Model for Car Accidents at a Signalized Intersection

Authors: Rachid Marzoug, Noureddine Lakouari, Beatriz Castillo Téllez, Margarita Castillo Téllez, Gerardo Alberto Mejía Pérez

Abstract:

This paper develops a two-lane cellular automata model to explain the relationship between car accidents at a signalized intersection and traffic-related parameters. It is found that increasing the lane-changing probability Pch increases the risk of accidents; in addition, the inflow α and the probability of accidents Pac exhibit a nonlinear relationship. Furthermore, depending on the inflow, Pac exhibits three different phases. The transition from phase I to phase II is of first (second) order when Pch = 0 (Pch > 0). However, the system exhibits a second (first) order transition from phase II to phase III when Pch = 0 (Pch > 0). In addition, when the inflow is not very high, the green light length of one road should be increased to improve road safety. Finally, simulation results show that traffic at the intersection is safer under symmetric lane-changing rules than under asymmetric ones.
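The single-lane core of such traffic cellular automata follows Nagel-Schreckenberg-style update rules (acceleration, gap-limited braking, random slowdown, movement). The sketch below is that generic building block only, not the paper's full two-lane signalized-intersection model:

```python
import random

def ca_step(pos, vel, vmax, p_slow, road_len, rng):
    """One parallel update of cars on a circular single-lane road."""
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    new_pos, new_vel = list(pos), list(vel)
    for k, i in enumerate(order):
        ahead = order[(k + 1) % len(order)]
        gap = (pos[ahead] - pos[i] - 1) % road_len   # empty cells ahead
        v = min(vel[i] + 1, vmax, gap)               # accelerate, then brake
        if rng.random() < p_slow:                    # random slowdown
            v = max(v - 1, 0)
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % road_len
    return new_pos, new_vel

rng = random.Random(0)
pos, vel = [0, 10], [0, 0]
for _ in range(3):              # deterministic here because p_slow = 0
    pos, vel = ca_step(pos, vel, vmax=3, p_slow=0.0, road_len=50, rng=rng)
```

The two-lane model adds a lane-change rule applied with probability Pch before the movement step, plus signal-dependent stopping at the intersection cell.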

Keywords: two-lane intersection, accidents, fatality risk, lane-changing, phase transition

Procedia PDF Downloads 212
13135 Modelling of Composite Steel and Concrete Beam with the Lightweight Concrete Slab

Authors: Veronika Přivřelová

Abstract:

Well-designed composite steel and concrete structures highlight the good material properties and lessen the deficiencies of steel and concrete; in particular, they make use of the high tensile strength of steel and the high stiffness of concrete. The most common composite steel and concrete structure is a simply supported beam, in which the concrete slab, which transfers the slab load to the beam, is connected to the steel cross-section. The aim of this paper is to find the most adequate numerical model of a simply supported composite beam, with cross-sectional and material parameters based on the results of a parametric study and numerical analysis. The paper also evaluates the suitability of using compact concrete with lightweight aggregates for composite steel and concrete beams. The most adequate numerical model will be used in the near future to compare the results of laboratory tests.

Keywords: composite beams, high-performance concrete, high-strength steel, lightweight concrete slab, modeling

Procedia PDF Downloads 401
13134 Investigation of the Use of Surface-Modified Waste Orange Pulp for the Adsorption of Remazol Black B

Authors: Ceren Karaman, Onur Karaman

Abstract:

The adsorption of Remazol Black B (RBB), an anionic dye, onto two adsorbents, dried orange pulp (DOP), prepared by drying alone, and surface-modified orange pulp (SMOP), prepared by treating the pulp with the cationic surfactant cetyltrimethylammonium bromide (CTAB), was studied in stirred batch experiments at 25°C. The adsorption of RBB on each adsorbent as a function of surfactant dosage, initial pH of the solution, and initial dye concentration was investigated. The optimum amount of CTAB was found to be 25 g/l. For the RBB adsorption studies, while the working pH value for the DOP adsorbent system was determined to be 2.0, this value shifted to 8.0 when the orange pulp treated with 25 g/l CTAB (SMOP) was used as the adsorbent. For both DOP and SMOP adsorbents, at pH 2.0 and pH 8.0 respectively, the adsorption rate and capacity increased to a certain value, and the adsorption efficiency decreased with increasing initial RBB concentration. The highest adsorption capacity was determined to be 62.4 mg/g for DOP at pH 2.0 and 325.0 mg/g for SMOP at pH 8.0. As a result, it can be said that the permanent cationic coating of the adsorbent surface by the CTAB surfactant shifted the working pH from 2.0 to 8.0 and increased the dye adsorption rate and capacity of orange pulp much more significantly at pH 8.0. The equilibrium RBB adsorption data for each adsorbent were best described by the Langmuir isotherm model. The adsorption kinetics of RBB on each adsorbent followed a pseudo-second-order model. Moreover, the intraparticle diffusion model was used to describe the kinetic data; it was found that diffusion is not the only rate-controlling step. The adsorbents were characterized by Brunauer–Emmett–Teller (BET) analysis, Fourier-transform infrared (FTIR) spectroscopy, and scanning electron microscopy (SEM). The mechanism for the adsorption of RBB on SMOP may include hydrophobic interaction, van der Waals interaction, stacking, and electrostatic interaction.
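The Langmuir isotherm and pseudo-second-order kinetics that the data followed both have closed forms; the sketch below uses the reported SMOP capacity of 325 mg/g, while the KL and k2 constants are made-up placeholders:

```python
def langmuir(ce, qm, kl):
    """Equilibrium uptake q_e = qm*KL*Ce / (1 + KL*Ce)."""
    return qm * kl * ce / (1.0 + kl * ce)

def pseudo_second_order(t, qe, k2):
    """Uptake at time t for pseudo-second-order kinetics:
    q_t = k2*qe^2*t / (1 + k2*qe*t); q_t approaches qe as t grows."""
    return (k2 * qe * qe * t) / (1.0 + k2 * qe * t)

# SMOP-like plateau capacity (325 mg/g reported); KL is an assumed constant
q_low  = langmuir(ce=10.0,  qm=325.0, kl=0.05)
q_high = langmuir(ce=500.0, qm=325.0, kl=0.05)   # approaches the plateau qm
```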

Keywords: adsorption, Cetyltrimethylammonium Bromide (CTAB), orange pulp, Remazol Black B (RBB), surface modification

Procedia PDF Downloads 243
13133 A Study on Game Theory Approaches for Wireless Sensor Networks

Authors: M. Shoukath Ali, Rajendra Prasad Singh

Abstract:

Game Theory approaches and their application in improving the performance of Wireless Sensor Networks (WSNs) are discussed in this paper. The mathematical modeling and analysis of WSNs may have a low success rate due to the complexity of the topology, link quality, and other factors. Game Theory, however, is a field that can be used efficiently to analyze WSNs. Game Theory is a branch of applied mathematics that describes and analyzes interactive decision situations. It has the ability to model independent, individual decision makers whose actions affect the surrounding decision makers. The outcome of complex interactions among rational entities can be predicted by a set of analytical tools. However, rationality demands strict adherence to a strategy based on measured or perceived results. Researchers are adopting game theory approaches to model and analyze leading wireless communication networking issues, including QoS, power control, resource sharing, etc.
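As a minimal illustration of the analytical tools mentioned, the pure-strategy Nash equilibria of a two-player matrix game can be found by checking mutual best responses; the payoff matrix below is a generic prisoner's-dilemma example, not a WSN model from the paper:

```python
def pure_nash(payoff_a, payoff_b):
    """Return all (row, col) cells where neither player gains by deviating."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            a_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
            b_best = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
            if a_best and b_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect
a = [[3, 0], [5, 1]]          # row player's payoffs
b = [[3, 5], [0, 1]]          # column player's payoffs
eq = pure_nash(a, b)          # mutual defection is the unique pure equilibrium
```

In a WSN setting the "players" would be sensor nodes and the payoffs would encode, for example, energy cost versus forwarding benefit.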

Keywords: wireless sensor network, game theory, cooperative game theory, non-cooperative game theory

Procedia PDF Downloads 420
13132 Determining the Sources of Sediment at Different Areas of the Catchment: A Case Study of Welbedacht Reservoir, South Africa

Authors: D. T. Chabalala, J. M. Ndambuki, M. F. Ilunga

Abstract:

Sedimentation includes the processes of erosion, transportation, deposition, and compaction of sediment. Sedimentation in a reservoir results in a decrease in water storage capacity, downstream problems involving aggradation and degradation, blockage of the intake, and changes in water quality. A study was conducted in the Caledon River catchment upstream of the Welbedacht Reservoir, located in the south-eastern part of the Free State province, South Africa. The aim of this research was to investigate and develop a model for integrated catchment modelling of sedimentation processes and management for the Welbedacht Reservoir. The Revised Universal Soil Loss Equation (RUSLE) was applied to determine the sources of sediment in different areas of the catchment. The model was also used to determine the impact of changes in management practice on erosion generation. The results revealed that the main sources of sediment in the watershed are cultivated land (273 tons per hectare), built-up and forest land (103.3 tons per hectare), and grassland, degraded land, and mining and quarry areas (3.9, 9.8 and 5.3 tons per hectare, respectively). After applying soil conservation practices to the developed RUSLE model, the results revealed that the total average annual soil loss in the catchment decreased by 76% and that the sediment yield from cultivated land decreased by 75%, while that from built-up and forest areas decreased by 42% and 99%, respectively. The results of this study will thus be used by government departments to develop sustainable policies.
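RUSLE itself is a simple product of factors, A = R·K·LS·C·P, which makes scenario comparisons like the ones above straightforward; the factor values below are invented illustrations chosen so that the example reproduces a 76% reduction, and are not the study's calibrated inputs:

```python
def rusle(r, k, ls, c, p):
    """Average annual soil loss A = R*K*LS*C*P (e.g. tons per hectare per year).

    r: rainfall erosivity, k: soil erodibility, ls: slope length-steepness,
    c: cover-management factor, p: support-practice factor.
    """
    return r * k * ls * c * p

def percent_reduction(before, after):
    return 100.0 * (before - after) / before

# Illustrative scenario: a conservation practice lowers C and P
before = rusle(r=500.0, k=0.3, ls=2.0, c=0.5, p=1.0)
after  = rusle(r=500.0, k=0.3, ls=2.0, c=0.2, p=0.6)
drop   = percent_reduction(before, after)   # 76% reduction in this example
```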

Keywords: Welbedacht reservoir, sedimentation, RUSLE, Caledon River

Procedia PDF Downloads 191
13131 Hydrological Response of the Glacierised Catchment: Himalayan Perspective

Authors: Sonu Khanal, Mandira Shrestha

Abstract:

Snow and glaciers are the largest dependable reserved sources of water for the river systems originating from the Himalayas, so accurate estimates of the volume of water contained in the snowpack and of the rate of release of water from snow and glaciers are needed for efficient management of the water resources. This research assesses the fusion of energy exchanges between the snowpack, the air above and the soil below according to mass and energy balance, which makes it more apposite than models using a simple temperature index for snow and glacier melt computation. UEBGrid, a distributed energy-based model, is used to calculate the melt, which is then routed by Geo-SFM. The model's robustness is maintained by incorporating the albedo generated from Landsat-7 ETM images on a seasonal basis for the year 2002-2003 and a substrate map derived from TM. The substrate file predominantly includes four major thematic layers, viz. snow, clean ice, glaciers and barren land. This approach makes use of the CPC RFE-2 and MERRA gridded data sets as the sources of precipitation and climatic variables. The subsequent model run for the years 2002-2008 shows that a total annual melt of 17.15 m is generated from the Marshyangdi Basin, of which 71% is contributed by glaciers, 18% by rain, and the rest by snowmelt. The albedo file is decisive in governing the melt dynamics, as a 30% increase in the generated surface albedo results in a 10% decrease in the simulated discharge. The melt routed with the land cover and soil variables using Geo-SFM shows a Nash-Sutcliffe efficiency of 0.60 against the observed discharge for the study period.
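The Nash-Sutcliffe efficiency used above to score the routed discharge can be sketched directly; the sample discharge series is invented for illustration:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean_obs)^2); 1.0 is a perfect fit,
    and values below 0 mean the model is worse than predicting the mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

obs = [120.0, 150.0, 300.0, 240.0, 180.0]   # invented discharge series (m3/s)
sim = [110.0, 160.0, 270.0, 230.0, 200.0]
nse = nash_sutcliffe(obs, sim)
```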

Keywords: glacier, glacier melt, snowmelt, energy balance

Procedia PDF Downloads 450
13130 Arabic Text Classification: Review Study

Authors: M. Hijazi, A. Zeki, A. Ismail

Abstract:

An enormous amount of valuable human knowledge is preserved in documents. The rapid growth in the number of machine-readable documents for public or private access requires the use of automatic text classification. Text classification can be defined as assigning or structuring documents into a defined set of classes known in advance. Arabic text classification methods have emerged as a natural result of the existence of a massive amount of varied textual information written in the Arabic language on the web. This paper presents a review of the published research on Arabic text classification using the classical data representation, bag of words (BoW), and using conceptual data representation based on semantic resources such as Arabic WordNet and Wikipedia.
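The bag-of-words representation reviewed above maps each document to a term-count vector over a fixed vocabulary, discarding word order; a minimal sketch, using transliterated placeholder tokens rather than real Arabic preprocessing (which would also need tokenization, normalization, and stemming):

```python
def bow_vector(document, vocabulary):
    """Count-of-word vector for one whitespace-tokenized document."""
    counts = {}
    for token in document.split():
        counts[token] = counts.get(token, 0) + 1
    return [counts.get(term, 0) for term in vocabulary]

vocab = ["kitab", "qalam", "madrasa"]          # toy vocabulary
vec = bow_vector("kitab qalam kitab", vocab)   # word order is discarded
```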

Keywords: Arabic text classification, Arabic WordNet, bag of words, conceptual representation, semantic relations

Procedia PDF Downloads 416
13129 Experimental Study and Numerical Modelling of Failure of Rocks Typical for Kuzbass Coal Basin

Authors: Mikhail O. Eremin

Abstract:

The present work is devoted to the experimental study and numerical modelling of the failure of rocks typical for the Kuzbass coal basin (Russia). The main goal was to define the strength and deformation characteristics of rocks on the basis of uniaxial compression and three-point bending loadings, and then to build a mathematical model of the failure process for both types of loading. Depending on their particular physical-mechanical characteristics, typical rocks of the Kuzbass coal basin (sandstones, siltstones, mudstones, etc. of different series – Kolchuginsk, Tarbagansk, Balohonsk) manifest a brittle or quasi-brittle character of failure. The strength characteristics for both tension and compression are found. Other characteristics are also found from the experiments or taken from literature reviews. On the basis of the obtained characteristics and the structure (obtained from microscopy), the mathematical and structural models are built and numerical modelling of failure under different types of loading is carried out. The effective characteristics obtained from the modelling and the character of failure correspond to the experiment, and thus the mathematical model was verified. An Instron 1185 machine was used to carry out the experiments. The mathematical model includes the fundamental conservation laws of solid mechanics – mass, momentum, and energy. Each rock has a markedly anisotropic structure; however, each crystallite might be considered isotropic, and then the whole-rock model has a quasi-isotropic structure. This idea gives an opportunity to use Hooke's law inside each crystallite, thus explicitly accounting for the anisotropy of rocks and the stress-strain state under loading. Inelastic behavior is described in the framework of two different models: the von Mises yield criterion and a modified Drucker-Prager yield criterion. The damage accumulation theory is also implemented in order to describe the failure process.
The obtained effective characteristics of the rocks are then used for modelling the evolution of the rock mass when mining is carried out by either an open-pit or an underground opening.
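The von Mises criterion used for the inelastic response reduces to comparing an equivalent stress against the yield stress; a minimal sketch with illustrative stress values, not data from the experiments:

```python
def von_mises_stress(s11, s22, s33, s12, s23, s13):
    """Equivalent (von Mises) stress from the six stress-tensor components."""
    return (0.5 * ((s11 - s22) ** 2 + (s22 - s33) ** 2 + (s33 - s11) ** 2)
            + 3.0 * (s12 ** 2 + s23 ** 2 + s13 ** 2)) ** 0.5

def yields(stress_components, sigma_y):
    """True if the material point has reached the yield surface."""
    return von_mises_stress(*stress_components) >= sigma_y

# Uniaxial compression of magnitude 90 MPa against a 100 MPa yield stress
elastic = not yields((-90.0, 0.0, 0.0, 0.0, 0.0, 0.0), 100.0)
```

The Drucker-Prager variant additionally makes the yield surface depend on the mean (pressure) stress, which matters for rocks whose compressive and tensile strengths differ.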

Keywords: damage accumulation, Drucker-Prager yield criterion, failure, mathematical modelling, three-point bending, uniaxial compression

Procedia PDF Downloads 169