Search results for: equation modeling methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19330

17500 Impact of Contemporary Performance Measurement System and Organization Justice on Academic Staff Work Performance

Authors: Amizawati Mohd Amir, Ruhanita Maelah, Zaidi Mohd Noor

Abstract:

As part of the Malaysian Higher Education Institutions' Strategic Plan to promote high-quality research and education, the Ministry of Higher Education has introduced various instruments to assess university performance. The aim is for universities to produce more commercially oriented research and to continue contributing a professional workforce for domestic and foreign needs. Yet the success of this vision rests on the commitment of the universities, particularly of their academic staff, to translate it into reality. For that reason, fairness and justice in assessing individual academic staff performance are crucial for directly linking university and individual work goals. Focusing on public research universities (RUs) in Malaysia, this study examines the issue through the universities' practice of contemporary performance measurement. Drawing on management control theory, contemporary performance measurement is conceptualized as consisting of three dimensions, namely strategic, comprehensive, and dynamic. Building on equity theory, the relationships between the contemporary performance measurement system and organizational justice, and in turn their effect on academic staff work performance, are tested using online survey data from 365 academic staff of public RUs, analyzed with SPSS and Structural Equation Modeling (SEM). The findings validated the presence of the strategic, comprehensive, and dynamic dimensions in the contemporary performance measurement system. The empirical evidence also indicated that contemporary performance measurement and procedural justice are significantly associated with work performance, but distributive justice is not. Furthermore, procedural justice does mediate the relationship between contemporary performance measurement and academic staff work performance.
Evidently, this study provides evidence of the importance of perceptions of justice in influencing academic staff work performance. This finding may be a fruitful input in setting up academic staff performance assessment policy.
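
The mediation structure tested in the abstract (procedural justice linking the measurement system to work performance) can be sketched with simple Baron-Kenny-style regressions on synthetic data. This is only an illustrative sketch of the idea, not the authors' SPSS/SEM analysis; all variable names, coefficients, and data below are hypothetical.

```python
import random

def ols_slope(x, y):
    """Least-squares slope of y regressed on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

random.seed(42)
n = 500
# Hypothetical standardized scores: performance measurement quality (pm),
# perceived procedural justice (pj), work performance (wp).
pm = [random.gauss(0, 1) for _ in range(n)]
pj = [0.6 * v + random.gauss(0, 0.2) for v in pm]                      # a-path
wp = [0.5 * m + 0.2 * v + random.gauss(0, 0.2) for m, v in zip(pj, pm)]

a = ols_slope(pm, pj)   # effect of the measurement system on justice
b = ols_slope(pj, wp)   # effect of justice on work performance
indirect = a * b        # rough estimate of the mediated (indirect) effect
```

A full SEM would estimate both paths jointly and bootstrap the indirect effect; the product a*b is the simplest approximation of the mediation the abstract reports.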

Keywords: comprehensive, dynamic, distributive justice, contemporary performance measurement system, strategic, procedural justice, work performance

Procedia PDF Downloads 390
17499 2D-Modeling with Lego Mindstorms

Authors: Miroslav Popelka, Jakub Nozicka

Abstract:

This work is based on the possibility of using Lego Mindstorms robotic kits to reduce the cost of 2D modeling. Lego Mindstorms consists of a wide variety of hardware components necessary to simulate, program, and test robotic systems in practice. The algorithm that scans the space with the ultrasonic sensor was programmed in the development environment supplied with the kit, and Matlab was used to render the values measured by the sensor. The algorithm created for this paper uses theoretical knowledge from the area of signal processing. The data processed by the algorithm are collected by an ultrasonic sensor that scans the 2D space in front of it. The sensor is placed on a moving arm of the robot, which provides its horizontal movement, while vertical movement is provided by a wheel drive. The robot follows a map in order to obtain the correct position of each measurement. Based on these findings, Lego Mindstorms can be considered a low-cost and capable kit for real-time modeling.
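
The core of the scanning step described above is converting each (angle, distance) reading from the sweeping ultrasonic sensor into a 2D Cartesian point; the actual work renders the result in Matlab. A minimal sketch, with hypothetical readings:

```python
import math

def scan_to_points(readings):
    """Convert ultrasonic readings (angle in degrees, distance in cm)
    from a sensor sweeping the space into 2D Cartesian points."""
    return [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
            for a, d in readings]

# Hypothetical sweep: an obstacle 10 cm away, straight ahead and to the side.
points = scan_to_points([(0, 10.0), (45, 14.1), (90, 10.0)])
```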

Keywords: LEGO Mindstorms, ultrasonic sensor, real-time modeling, 2D object, low-cost robotics systems, sensors, Matlab, EV3 Home Edition Software

Procedia PDF Downloads 455
17498 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping

Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa

Abstract:

The artificial neural network is one of the techniques that has been used to advantage in modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. To show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the number of neuron units in the last layer makes it possible to find the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation process, which avoids the need for computers with high memory usage.
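
The underlying task — fitting a one-dimensional continuous input-output mapping with a small network — can be sketched in pure Python. The toy single-hidden-layer network, target function, and training setup below are illustrative assumptions, not the authors' CANN architecture or coding scheme.

```python
import math
import random

random.seed(0)
H = 8                                            # hidden units (tanh)
w = [random.uniform(-1, 1) for _ in range(H)]    # input -> hidden weights
b = [random.uniform(-1, 1) for _ in range(H)]    # hidden biases
v = [random.uniform(-1, 1) for _ in range(H)]    # hidden -> output weights
c = 0.0                                          # output bias

xs = [i / 20 for i in range(21)]                 # inputs on [0, 1]
ts = [x * x for x in xs]                         # target mapping: f(x) = x^2

def forward(x):
    h = [math.tanh(w[j] * x + b[j]) for j in range(H)]
    return sum(v[j] * h[j] for j in range(H)) + c, h

def loss():
    return sum((forward(x)[0] - t) ** 2 for x, t in zip(xs, ts)) / len(xs)

initial = loss()
lr = 0.05
for _ in range(500):                             # plain batch gradient descent
    gw = [0.0] * H; gb = [0.0] * H; gv = [0.0] * H; gc = 0.0
    for x, t in zip(xs, ts):
        y, h = forward(x)
        d = 2 * (y - t) / len(xs)                # dL/dy for this sample
        for j in range(H):
            gv[j] += d * h[j]
            gw[j] += d * v[j] * (1 - h[j] ** 2) * x
            gb[j] += d * v[j] * (1 - h[j] ** 2)
        gc += d
    for j in range(H):
        w[j] -= lr * gw[j]; b[j] -= lr * gb[j]; v[j] -= lr * gv[j]
    c -= lr * gc

final = loss()
```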

Keywords: neural network computing, continuous functions generating the input-output mapping, decreasing the training time, machines with big memories

Procedia PDF Downloads 265
17497 Modeling the Risk Perception of Pedestrians Using a Nested Logit Structure

Authors: Babak Mirbaha, Mahmoud Saffarzadeh, Atieh Asgari Toorzani

Abstract:

Pedestrians are the most vulnerable road users since they do not have a protective shell. One of the most common collisions involving them is the pedestrian-vehicle collision at intersections. In order to develop appropriate countermeasures to improve pedestrian safety, research has to be conducted to identify the factors that affect the risk of being involved in such collisions. More specifically, this study investigates the influence of factors such as walking alone or carrying a baby while crossing the street, the apparent age of the pedestrian, the speed of pedestrians, and the speed of approaching vehicles on the risk perception of pedestrians. A nested logit model was used to model the behavioral structure of pedestrians. The results show that the presence of more lanes at intersections and not being alone, especially carrying a baby while crossing, decrease the probability of risk-taking among pedestrians. Teenagers also appear to behave more riskily when crossing the street than other age groups. The speed of approaching vehicles was also significant: the probability of risk-taking among pedestrians decreases as the speed of the approaching vehicle increases, in both the first and the second lanes of crossings.
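
A nested logit of the kind used above computes choice probabilities in two levels: the probability of a nest, then the probability of an alternative within that nest via the nest's inclusive value. The sketch below illustrates the mechanics; the nests, utilities, and scale parameters are invented for illustration and are not the estimated model.

```python
import math

def nested_logit_probs(nests, lam):
    """Two-level nested logit choice probabilities.
    nests: {nest_name: {alternative: utility}}; lam: {nest_name: scale in (0, 1]}."""
    iv = {n: math.log(sum(math.exp(u / lam[n]) for u in alts.values()))
          for n, alts in nests.items()}                        # inclusive values
    denom = sum(math.exp(lam[n] * iv[n]) for n in nests)
    probs = {}
    for n, alts in nests.items():
        p_nest = math.exp(lam[n] * iv[n]) / denom              # upper level
        within = sum(math.exp(u / lam[n]) for u in alts.values())
        for a, u in alts.items():
            probs[a] = p_nest * math.exp(u / lam[n]) / within  # lower level
    return probs

# Hypothetical pedestrian crossing choice (values invented for illustration).
nests = {"risky": {"cross_now": 0.2, "cross_midblock": -0.1},
         "safe": {"wait_for_gap": 1.0}}
lam = {"risky": 0.5, "safe": 1.0}
probs = nested_logit_probs(nests, lam)
```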

Keywords: pedestrians, intersection, nested logit, risk

Procedia PDF Downloads 170
17496 Perspectives of Computational Modeling in Sanskrit Lexicons

Authors: Baldev Ram Khandoliyan, Ram Kishor

Abstract:

India has a classical tradition of Sanskrit lexicons, and research has been done on the study of Indian lexicography. India has seen amazing strides in Information and Communication Technology (ICT) applications for Indian languages in general and for Sanskrit in particular. Since machine translation from Sanskrit to other Indian languages is often the desired goal, traditional Sanskrit lexicography has attracted a lot of attention from the ICT and computational linguistics community. From the Nighaŋţu and Nirukta to the Amarakośa and Medinīkośa, Sanskrit has a rich history of lexicography. As these kośas do not follow the same typology or standard in the selection and arrangement of words and the information related to them, several kośa styles have emerged in this tradition. The model of grammar given by the Aṣṭādhyāyī is well appreciated by Indian and Western linguists and grammarians, but the different models provided by the lexicographic tradition are also important. The general usefulness of traditional Sanskrit kośas, chiefly the material made available in the texts, has been discussed by some scholars, and some have also discussed the arrangement of the lexica. This paper aims to discuss further uses of the different models of Sanskrit lexicography, focusing especially on their computational modeling and its use in different computational operations.

Keywords: computational lexicography, Sanskrit lexicons, Nighaṇṭu, kośa, Amarakośa

Procedia PDF Downloads 149
17495 Forensic Methods Used for the Verification of the Authenticity of Prints

Authors: Olivia Rybak-Karkosz

Abstract:

This paper aims to present the results of scientific research on methods of forging art prints and their elements, such as signatures or provenance, and the forensic science methods that might be used to verify their authenticity. In recent decades the art market has seen significant interest in purchasing prints, which are considered an economical alternative to paintings and a worthwhile investment. However, the authenticity of an art print is difficult to establish, as similar visual effects can be achieved with drawings or xerography; the latter is easy to produce on a home printer, and such copies are then offered at flea markets or in internet auctions as genuine prints. This ease of forgery, together with the difficulty of distinguishing printmaking techniques, was the main reason this research was undertaken. The lack of scientific methods dedicated to disclosing such forgery encouraged the author to examine whether forensic science methods known and used in other fields of expertise could be applied. The research methodology consisted of compiling representative forgery samples collected in selected museums based in Poland and a few in Germany and Austria, which allowed the author to present a typology of the methods used to forge art prints. Given that banknotes and securities are among the most prominent examples of graphic design, it seems appropriate to propose the use of counterfeit-currency detection methods in print verification. These methods include the examination of ink, paper, and watermarks. On prints, signatures and imprints of stamps, among other elements, are forged as well, so the examination should be complemented with handwriting examination and forensic sphragistics. The paper includes a recommendation to conduct a complex analysis of authenticity with the participation of an art restorer, an art historian, and a forensic expert as the head of the team.

Keywords: art forgery, examination of an artwork, handwriting analysis, prints

Procedia PDF Downloads 111
17494 Geochemical Modeling of Mineralogical Changes in Rock and Concrete in Interaction with Groundwater

Authors: Barbora Svechova, Monika Licbinska

Abstract:

Geochemical modeling of the mineralogical changes of various materials in contact with an aqueous solution is an important tool for predicting the processes and development of given materials at a site. The modeling here focused on the interaction of groundwater in contact with the rock mass and its subsequent influence on concrete structures. The studied locality is situated in Slovakia in the area of the Liptov Basin, a significant inter-mountain lowland bordered on the north and south by the core mountain belt of the Tatras, where the crystalline basement rises to the surface in the center, accompanied by its Mesozoic cover. Groundwater in the area is bound to structures with a complicated geological setting; from the hydrogeological point of view, it is an environment with a crack-fracture character. The area is characterized by a shallow surface circulation of groundwater without a significant collector structure, and from a chemical point of view, the groundwater has been classified as calcium bicarbonate with a high content of CO2 and SO4 ions. According to the European standard EN 206-1, these are waters with medium aggressiveness towards concrete. Three rock samples were taken from the area and, based on petrographic and mineralogical research, evaluated as calcareous shale, micritic limestone, and crystalline shale. These three rock samples were placed in demineralized water for one month, and the change in the chemical composition of the water was monitored. During the solution-rock interaction, the concentrations of all major ions except nitrates increased. Concentrations rose after a week, but at the end of the experiment they were lower than the initial values. Another experiment was the interaction of groundwater from the studied locality with a concrete structure; the concrete sample was likewise left in the water for one month.
The results of the experiment confirmed the assumed reduction in the concentrations of calcium and bicarbonate ions in the water due to the precipitation of amorphous forms of CaCO3 on the surface of the sample. Conversely, the concentrations of sulphates, sodium, iron, and aluminum surprisingly increased due to leaching of the concrete. The chemical analyses from these experiments were processed in the PHREEQc program, which calculated the probability of the formation of amorphous mineral forms. From the results of the chemical analyses and the hydrochemical modeling of water collected in situ and water from the experiments, it was found that the groundwater at the site is unsaturated and shows moderate aggressiveness towards reinforced concrete structures according to EN 206-1, which will affect the homogeneity and integrity of concrete structures, and that Ca, Na, Fe, HCO3, and SO4 are leached from the rocks in the given area. Unsaturated waters will dissolve whatever they contact in the solid matrix; the speed of this process then depends on the physicochemical parameters of the environment (temperature, ORP, pressure, porosity, water retention time in the environment, etc.).
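
Whether a phase such as calcite can precipitate is decided in PHREEQc from its saturation index, SI = log10(IAP) - log10(Ksp). The core of that check can be sketched as below; the ion activities are hypothetical and the log Ksp for calcite is a textbook value, whereas the actual study used the measured analyses.

```python
import math

def saturation_index(activity_ca, activity_co3, log_ksp=-8.48):
    """SI = log10(IAP) - log10(Ksp) for calcite (CaCO3).
    SI > 0: supersaturated (precipitation possible); SI < 0: undersaturated."""
    iap = activity_ca * activity_co3
    return math.log10(iap) - log_ksp

# Hypothetical activities (mol/L) after the concrete-water experiment:
si = saturation_index(1e-4, 1e-4)
```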

Keywords: geochemical modeling, concrete, dissolution, PHREEQc

Procedia PDF Downloads 183
17493 Comparison Approach for Wind Resource Assessment to Determine Most Precise Approach

Authors: Tasir Khan, Ishfaq Ahmad, Yejuan Wang, Muhammad Salam

Abstract:

Distribution models of wind speed data are essential for assessing potential wind energy because they decrease the uncertainty in estimating wind energy output. Therefore, before performing a detailed potential energy analysis, the distribution model that best fits the wind speed data must be found. In this research, numerous goodness-of-fit criteria, such as the Kolmogorov-Smirnov and Anderson-Darling statistics, Chi-Square, root mean square error (RMSE), AIC, and BIC, were combined to determine the best-fitted wind speed distribution; the suggested method takes each criterion into account collectively. It was applied to fitting 14 distribution models statistically to wind speed data at four sites in Pakistan. The results show that this method provides the best basis for selecting the most suitable wind speed statistical distribution, and the graphical representation is consistent with the analytical results. This research also presents three estimation methods that can be used to fit the distributions: the method of linear moments (MLM), the method of moments (MOM), and the maximum likelihood estimate (MLE). The third-order moment used in the wind energy formula is a key quantity because it makes an important contribution to the precise estimate of wind energy. To demonstrate the merit of the suggested MOM, it was compared with well-known estimation methods, such as the method of linear moments and the maximum likelihood estimate. In the comparative analysis, based on several goodness-of-fit measures, the performance of the considered techniques was evaluated on actual wind speeds measured in different time periods. The results show that MOM provides a more precise estimation than the other familiar approaches in terms of estimating wind energy based on the fourteen distributions. Therefore, MOM can be used as a better technique for assessing wind energy.
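
For the two-parameter Weibull distribution, one of the usual candidates among the fourteen, the method-of-moments (MOM) idea can be sketched as: match the sample's squared coefficient of variation to Γ(1+2/k)/Γ(1+1/k)² − 1 to solve for the shape k, then recover the scale c from the mean. This is an illustrative sketch with synthetic data, not the authors' implementation.

```python
import math
import random

def weibull_mom(data):
    """Method-of-moments fit of Weibull shape k and scale c."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    cv2 = var / mean ** 2

    def cv2_of(k):  # squared coefficient of variation of Weibull(k, c)
        g1, g2 = math.gamma(1 + 1 / k), math.gamma(1 + 2 / k)
        return g2 / g1 ** 2 - 1

    lo, hi = 0.2, 20.0            # cv2_of is decreasing in k: bisect
    for _ in range(80):
        mid = (lo + hi) / 2
        if cv2_of(mid) > cv2:
            lo = mid
        else:
            hi = mid
    k = (lo + hi) / 2
    c = mean / math.gamma(1 + 1 / k)
    return k, c

# Synthetic wind speeds from Weibull(k=2, c=6) via inverse-CDF sampling.
random.seed(1)
sample = [6 * (-math.log(1 - random.random())) ** 0.5 for _ in range(5000)]
k_hat, c_hat = weibull_mom(sample)
```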

Keywords: wind-speed modeling, goodness of fit, maximum likelihood method, linear moment

Procedia PDF Downloads 71
17492 A Deep Learning Approach to Subsection Identification in Electronic Health Records

Authors: Nitin Shravan, Sudarsun Santhiappan, B. Sivaselvan

Abstract:

Subsection identification, in the context of Electronic Health Records (EHRs), is the task of identifying the sections that matter for downstream tasks like auto-coding. In this work, we classify the text present in EHRs according to the information it carries, using machine learning and deep learning techniques. We first briefly describe the problem and formulate it as a text classification problem. Then we discuss the methods from the literature. We try two approaches: traditional feature-extraction-based machine learning methods and deep learning methods. Through experiments on a private dataset, we establish that the deep learning methods perform better than the feature-extraction-based machine learning models.
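
A minimal feature-extraction baseline of the kind compared above can be sketched as a bag-of-words naive Bayes classifier over toy EHR snippets. The section labels and sentences below are invented for illustration; the actual experiments used a private dataset.

```python
import math
from collections import Counter, defaultdict

# Toy labeled snippets: (text, EHR subsection label) -- hypothetical examples.
train = [
    ("patient reports chest pain for two days", "history"),
    ("no prior history of cardiac disease", "history"),
    ("prescribed aspirin 75 mg daily", "medications"),
    ("continue metformin twice daily", "medications"),
]

word_counts = defaultdict(Counter)
label_counts = Counter()
for text, lbl in train:
    label_counts[lbl] += 1
    word_counts[lbl].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def predict(text):
    """Multinomial naive Bayes with Laplace smoothing."""
    best, best_lp = None, -math.inf
    for lbl in label_counts:
        lp = math.log(label_counts[lbl] / len(train))
        total = sum(word_counts[lbl].values())
        for w in text.split():
            lp += math.log((word_counts[lbl][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

label = predict("aspirin prescribed daily")
```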

Keywords: deep learning, machine learning, semantic clinical classification, subsection identification, text classification

Procedia PDF Downloads 195
17491 Roadmaps as a Tool of Innovation Management: System View

Authors: Matich Lyubov

Abstract:

Today roadmaps are becoming commonly used tools for detecting and designing a desired future for companies, states, and the international community. The growing popularity of this method puts on the agenda tasks such as identifying basic roadmapping principles, creating concepts, and determining the characteristics of roadmap use depending on the objectives as well as the restrictions and opportunities specific to the study area. However, the system approach, i.e., the elements recognized as major prerequisites for high-quality roadmapping, remains one of the main fields for improving the methodology and practice of roadmap development, as limited research has been devoted to a detailed analysis of roadmaps from the view of the system approach. This article is therefore an attempt to examine roadmaps from the view of systems analysis and to compare the areas where roadmaps and systems analysis are, as a rule, considered the most effective tools. Of special importance for comparing the structure and composition of roadmaps and systems models are the identification of common points between the construction stages of roadmaps and system modeling, and the determination of future directions for researching roadmaps from a systems perspective.

Keywords: technology roadmap, roadmapping, systems analysis, system modeling, innovation management

Procedia PDF Downloads 293
17490 Stating Best Commercialization Method: An Unanswered Question from Scholars and Practitioners

Authors: Saheed A. Gbadegeshin

Abstract:

A commercialization method is a means to make inventions available on the market for final consumption. It is described as an important tool for keeping business enterprises sustainable and for improving national economic growth. Thus, there are several scholarly publications on it, either presenting or testing different methods of commercialization. However, young entrepreneurs, technologists, and scientists would like to know the best method to commercialize their innovations, which raises the question: what is the best commercialization method? To answer this question, a systematic literature review was conducted and practitioners were interviewed. The literature results revealed that there are many methods, but that new methods are needed to improve commercialization, especially in these times of economic crisis and political uncertainty. Similarly, the empirical results showed that there are several methods, but that the best method is the one that reduces costs, reduces the risks associated with uncertainty, and improves customer participation and acceptability. Therefore, it was concluded that a new commercialization method is essential for today's high technologies, and such a method is presented.

Keywords: commercialization method, technology, knowledge, intellectual property, innovation, invention

Procedia PDF Downloads 325
17489 Assessment of the Implementation of Recommended Teaching and Evaluation Methods of NCE Arabic Language Curriculum in Colleges of Education in North Western Nigeria

Authors: Hamzat Shittu Atunnise

Abstract:

This study on the assessment of the implementation of the recommended teaching and evaluation methods of the Nigeria Certificate in Education (NCE) Arabic language curriculum in Colleges of Education in North Western Nigeria was conducted with four objectives, four research questions, and four null hypotheses. A descriptive survey design was used, and a multistage sampling procedure was adopted. Frequency counts and percentages were used to answer the research questions, and chi-square was used to test all the null hypotheses at an alpha level of significance of 0.05. Two hundred and ninety-one subjects were drawn as the sample, and questionnaires were used for data collection. The Context, Input, Process, and Product (CIPP) model of evaluation was employed. The findings indicated that there were no significant differences in the perceptions of lecturers and students from Federal and State Colleges of Education regarding the extent to which lecturers employ appropriate methods in teaching the language and the extent to which the recommended evaluation methods are utilized in implementing the Arabic curriculum. Based on these findings, it was recommended, among other things, that lecturers adopt teaching methodologies that promote interactive learning; that governments ensure that information and communication technology facilities are made available and usable in all Colleges of Education; that lecturers vary their evaluation methods, because other methods of evaluation can meet and surpass the level of learning and understanding that essay-type questions are believed to create; and that language labs be used in teaching Arabic in Colleges of Education, because comprehensive language learning is possible through both classroom and language lab teaching.

Keywords: assessment, arabic language, curriculum, methods of teaching, evaluation methods, NCE

Procedia PDF Downloads 44
17488 Consumer’s Behavioral Responses to Corporate Social Responsibility Marketing: Mediating Impact of Customer Trust, Emotions, Brand Image, and Brand Attitude

Authors: Yasir Ali Soomro

Abstract:

Companies that demonstrate corporate social responsibility (CSR) are more likely to withstand any downturn or crisis because of the trust built with stakeholders. Many firms use CSR marketing to improve their interactions with various stakeholders, mainly consumers. Most previous research on CSR has focused on its impact on customer responses and behaviors toward a company. As online food ordering and grocery shopping have become inevitable, this study will investigate the structural relationships among consumer positive emotions (CPE) and negative emotions (CNE), corporate reputation (CR), customer trust (CT), brand image (BI), and brand attitude (BA) on behavioral outcomes such as online purchase intention (OPI) and word of mouth (WOM) in retail grocery and food restaurant settings. The hierarchy-of-effects model will be used as the theoretical and conceptual framework; the model describes three stages of consumer behavior: (i) cognitive, (ii) affective, and (iii) conative. The study will apply a quantitative method to test the hypotheses; a self-developed questionnaire with non-probability sampling will be used to collect data from 500 consumers belonging to generations X, Y, and Z residing in KSA. The study will contribute by providing empirical evidence to support the link between CSR and customer affective and conative experiences in Saudi Arabia. As its theoretical contribution, the study will empirically test a comprehensive model in which CPE, CNE, CR, CT, BI, and BA act as mediating variables between perceived CSR and online purchase intention (OPI) and word of mouth (WOM). Further, the study will add to the CSR literature on how emotional and psychological processes mediate these relationships, especially in the Middle Eastern context. The proposed study will also explain the direct and indirect effects of perceived CSR marketing initiatives on customer behavioral responses.

Keywords: corporate social responsibility, corporate reputation, consumer emotions, loyalty, online purchase intention, word-of-mouth, structural equation modeling

Procedia PDF Downloads 71
17487 Comparing the Experimental Thermal Conductivity Results Using Transient Methods

Authors: Sofia Mylona, Dale Hume

Abstract:

The main scope of this work is to compare experimental thermal conductivity results for fluids between devices using transient techniques. A range of liquids spanning a range of viscosities was measured with two or more devices, and the results were compared between the different methods and against reference equations wherever available. The liquids selected are those most commonly used in academic and industrial laboratories to calibrate thermal conductivity instruments, covering a variety of thermal conductivities, viscosities, and densities. Three transient methods (transient hot wire, transient plane source, and transient line source) were compared on the thermal conductivity measurements taken with them. These methods were chosen as the most accurate and because they all follow the same idea: the thermal conductivity is calculated from the slope of a plot of sensor temperature rise as a function of the logarithm of time. For all measurements, the selected temperature range was 10 to 40 °C at atmospheric pressure. Our results are in agreement with the objections of several scientists regarding the reliability of the results of a few popular devices. Surprisingly, a device used in many laboratories for fast measurements of liquid thermal conductivity displayed deviations of up to 500 percent, which were also very poorly reproducible.
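
The shared idea of the three methods — extracting conductivity from the slope of temperature rise versus log time — can be sketched for the transient hot wire, where ideally ΔT = (q/4πλ)·ln t + const: fit the slope by least squares and invert for λ. The heating power and data below are synthetic illustrations, not measurements from the compared devices.

```python
import math

def conductivity_from_rise(times, delta_t, q):
    """Transient hot wire: fit dT = slope*ln(t) + const by least squares
    and recover lambda = q / (4*pi*slope). q is power per unit length (W/m)."""
    ln_t = [math.log(t) for t in times]
    n = len(ln_t)
    mx = sum(ln_t) / n
    my = sum(delta_t) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(ln_t, delta_t))
             / sum((x - mx) ** 2 for x in ln_t))
    return q / (4 * math.pi * slope)

# Synthetic ideal data for a fluid with lambda = 0.60 W/(m K), q = 1.0 W/m.
q = 1.0
true_slope = q / (4 * math.pi * 0.60)
times = [0.1 * i for i in range(1, 21)]
delta_t = [true_slope * math.log(t) + 2.0 for t in times]
lam = conductivity_from_rise(times, delta_t, q)
```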

Keywords: accurate data, liquids, thermal conductivity, transient methods

Procedia PDF Downloads 141
17486 Vector-Based Analysis in Cognitive Linguistics

Authors: Chuluundorj Begz

Abstract:

This paper presents a dynamic, psycho-cognitive approach to the study of human verbal thinking on the basis of typologically different languages (Mongolian, English, and Russian). Topological equivalence in verbal communication serves as a basis for the universality of mental structures and therefore of deep structures. The mechanism of verbal thinking consists, at the deep level, of basic concepts, rules for integration and classification, and neural networks of vocabulary. In the neurocognitive study of language, neural architecture and the neuropsychological mechanism of verbal cognition are the basis of vector-based modeling. Verbal perception and the interpretation of the infinite set of meanings and propositions in the mental continuum can be modeled by applying tensor methods. Euclidean and non-Euclidean spaces are applied to describe the human semantic vocabulary and higher-order structures.
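
The vector-space view of the mental lexicon sketched above rests on comparing word vectors by angle. A minimal example, with hypothetical 3-dimensional feature vectors invented for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two semantic vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical lexicon vectors (feature values invented for illustration).
lexicon = {
    "horse": [0.9, 0.8, 0.1],
    "steed": [0.8, 0.9, 0.2],
    "equation": [0.1, 0.0, 0.9],
}
sim_near = cosine(lexicon["horse"], lexicon["steed"])
sim_far = cosine(lexicon["horse"], lexicon["equation"])
```

Near-synonyms cluster at high cosine similarity while unrelated words fall toward zero, which is the geometric basis for the semantic-memory modeling the abstract describes.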

Keywords: Euclidean spaces, isomorphism and homomorphism, mental lexicon, mental mapping, semantic memory, verbal cognition, vector space

Procedia PDF Downloads 506
17485 Finite Element Modeling of the Mechanical Behavior of Municipal Solid Waste Incineration Bottom Ash with the Mohr-Coulomb Model

Authors: Le Ngoc Hung, Abriak Nor Edine, Binetruy Christophe, Benzerzour Mahfoud, Shahrour Isam, Patrice Rivard

Abstract:

Bottom ash from Municipal Solid Waste Incineration (MSWI) can be viewed as a typical granular material because these industrial by-products result from the incineration of various domestic wastes. MSWI bottom ashes are mainly used in road engineering as a substitute for traditional natural aggregates. As the characterization of their mechanical behavior is essential to their use, specific studies have been conducted over the past few years. In the first part of this paper, the mechanical behavior of MSWI bottom ash is studied with triaxial tests. After analysis of the experimental results, the triaxial tests are simulated using the software package CESAR-LCPC. As a first approach to modeling this new class of material, the Mohr-Coulomb model was chosen to describe the evolution of the material under external mechanical actions.
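
The Mohr-Coulomb criterion used in the simulation can be written directly in terms of the principal stresses: failure occurs when σ1 reaches σ3·Nφ + 2c·√Nφ, with Nφ = tan²(45° + φ/2). The cohesion and friction angle below are hypothetical round numbers, not fitted MSWI bottom ash parameters.

```python
import math

def mc_failure_stress(sigma3, cohesion, phi_deg):
    """Major principal stress at Mohr-Coulomb failure:
    sigma1_f = sigma3 * N_phi + 2 * c * sqrt(N_phi),
    with N_phi = tan^2(45 + phi/2) (stresses in kPa)."""
    n_phi = math.tan(math.radians(45 + phi_deg / 2)) ** 2
    return sigma3 * n_phi + 2 * cohesion * math.sqrt(n_phi)

# Hypothetical parameters: c = 10 kPa, phi = 30 deg, confining stress 100 kPa.
sigma1_f = mc_failure_stress(100.0, 10.0, 30.0)
```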

Keywords: bottom ash, granular material, triaxial test, mechanical behavior, simulation, Mohr-Coulomb model, CESAR-LCPC

Procedia PDF Downloads 296
17484 Selection the Most Suitable Method for DNA Extraction from Muscle of Iran's Canned Tuna by Comparison of Different DNA Extraction Methods

Authors: Marjan Heidarzadeh

Abstract:

High quality and purity of the DNA isolated from canned tuna are essential for species identification. In this study, the efficiency of five different DNA extraction methods was compared: the method of the national standard in Iran, the CTAB precipitation method, the Wizard DNA Clean Up system, Nucleospin, and GenomicPrep. DNA was extracted from two different canned tuna products, in brine and in oil, of the same tuna species, and three samples of each type of product were analyzed with each method. The quantity and quality of the extracted DNA were evaluated from the absorbance at 260 nm and the A260/A280 ratio, measured with a Picodrop spectrophotometer. The results showed that DNA extraction from canned tuna preserved in different liquid media can be optimized by employing a specific DNA extraction method in each case: the best results were obtained with the CTAB method for canned tuna in oil and with the Wizard method for canned tuna in brine.
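
The spectrophotometric quality check described above can be sketched as follows. The conversion factor of 50 ng/µL per A260 unit for double-stranded DNA and the 1.8-2.0 purity window are textbook conventions, and the absorbance readings are hypothetical, not the study's measurements.

```python
def dna_quality(a260, a280, dilution=1.0):
    """Estimate dsDNA concentration (ng/uL) and A260/A280 purity.
    A ratio of roughly 1.8-2.0 indicates protein-free DNA."""
    concentration = a260 * 50.0 * dilution  # 1 A260 unit ~ 50 ng/uL dsDNA
    purity = a260 / a280
    return concentration, purity, 1.8 <= purity <= 2.0

# Hypothetical readings from one extract:
conc, purity, acceptable = dna_quality(0.50, 0.27)
```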

Keywords: canned tuna, PCR, DNA, DNA extraction methods, species identification

Procedia PDF Downloads 639
17483 Media Literacy: Information and Communication Technology Impact on Teaching and Learning Methods in Albanian Education System

Authors: Loreta Axhami

Abstract:

Media literacy in the digital age emerges not only as a set of skills for generating true knowledge and information but also as a pedagogical methodology, a kind of educational philosophy. In addition to innovations such as the integration of information and communication technologies, media infrastructures, and web usage into the educational system, media literacy enables change in learning methods, pedagogy, teaching programs, and the school curriculum itself. In this framework, this study focuses on the impact of ICT on teaching and learning methods and the degree to which they are reflected in the Albanian education system. The study is based on a combination of quantitative and qualitative methods of scientific research. The findings show that students' limited access to the internet at school, the focus on hard-copy textbooks, and the role of the teacher as the only or main source of knowledge and information are some of the main factors contributing to the persistence of authoritarian pedagogical methods in the Albanian education system. In these circumstances, the implementation of media literacy is recommended as an apt educational process for the 21st century, one that requires a reconceptualization of textbooks as well as the application of modern teaching and learning methods integrating information and communication technologies.

Keywords: authoritarian pedagogic model, education system, ICT, media literacy

Procedia PDF Downloads 119
17482 Characterization and Modelling of Groundwater Flow towards a Public Drinking Water Well Field: A Case Study of Ter Kamerenbos Well Field

Authors: Buruk Kitachew Wossenyeleh

Abstract:

Groundwater is the largest freshwater reservoir in the world. Like the other reservoirs of the hydrologic cycle, it is a finite resource. This study focused on groundwater modeling of the Ter Kamerenbos well field to understand the groundwater flow system and the impact of different scenarios. The study area covers 68.9 km² in the Brussels Capital Region and is situated in two river catchments, i.e., the Zenne River and the Woluwe Stream. The aquifer system has three layers, but in the modeling they are treated as one layer because of their hydrogeological properties. The catchment aquifer system is replenished by direct recharge from rainfall. The groundwater recharge of the catchment is determined using the spatially distributed water balance model WetSpass, and it varies annually from zero to 340 mm. This recharge is used as the top boundary condition for the groundwater model of the study area. In the groundwater modeling with Processing MODFLOW, constant-head boundary conditions are used on the north and south boundaries of the study area, and head-dependent flow boundary conditions on the east and west boundaries. The groundwater model is calibrated manually and automatically against observed hydraulic heads in 12 observation wells. The model performance evaluation showed a root mean square error (RMSE) of 1.89 m and a Nash-Sutcliffe efficiency (NSE) of 0.98. The head contour map of the simulated hydraulic heads indicates the flow direction in the catchment, mainly from the Woluwe to the Zenne catchment. The simulated head in the study area varies from 13 m to 78 m. The higher hydraulic heads are found in the southwest of the study area, which has forest as a land-use type. This calibrated model was run for a climate change scenario and a well operation scenario.
Climate change may cause the groundwater recharge to increase by 43% or decrease by 30% by 2100 relative to current conditions for the high and low climate change scenarios, respectively. The groundwater head varies from 13 m to 82 m for the high climate change scenario, whereas for the low climate change scenario it varies from 13 m to 76 m. If a doubling of the pumping discharge is assumed, the groundwater head varies from 13 m to 76.5 m; if a shutdown of the pumps is assumed, the head varies from 13 m to 79 m. It is concluded that the groundwater model performs satisfactorily, with some limitations, and that the model output can be used to understand the aquifer system under steady-state conditions. Finally, some recommendations are made for the future use and improvement of the model.
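The two calibration metrics quoted for the head observations (RMSE and Nash-Sutcliffe efficiency, NSE) are straightforward to compute from paired observed and simulated heads; a minimal sketch in Python, using hypothetical head values rather than the study's actual well data:

```python
import math

def rmse(obs, sim):
    # Root mean square error between observed and simulated heads
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 = perfect fit, <= 0 = no better than the observed mean
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Hypothetical hydraulic heads (m) at 12 observation wells (illustrative only)
observed  = [13.2, 18.5, 22.1, 30.4, 35.0, 41.3, 47.8, 52.6, 60.1, 66.9, 72.4, 78.0]
simulated = [14.0, 17.9, 23.5, 29.8, 36.2, 40.1, 48.9, 53.3, 58.7, 68.0, 71.2, 79.5]

print(round(rmse(observed, simulated), 2), round(nse(observed, simulated), 3))
```

An NSE close to 1, as reported for the Ter Kamerenbos model, indicates that the simulated heads explain almost all of the observed variance.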

Keywords: Ter Kamerenbos, groundwater modelling, WetSpass, climate change, well operation

Procedia PDF Downloads 141
17481 Three Dimensional Simulation of the Transient Modeling and Simulation of Different Gas Flows Velocity and Flow Distribution in Catalytic Converter with Porous Media

Authors: Amir Reza Radmanesh, Sina Farajzadeh Khosroshahi, Hani Sadr

Abstract:

The transient performance of a catalytic converter is governed by complex interactions between the exhaust gas flow and the monolithic structure of the converter. Stringent emission regulations around the world necessitate the use of highly efficient catalytic converters in vehicle exhaust systems. Computational fluid dynamics (CFD) is a powerful tool for calculating the flow field inside the catalytic converter. Radial velocity profiles obtained by a commercial CFD code show very good agreement with the respective experimental results published in the literature. However, the applicability of CFD to transient simulations is limited by the high CPU demands. In the present work, geometric modeling of the ceramic monolith substrate is carried out for a square-channel catalytic converter coated with platinum and palladium. This example illustrates the effect of flow distribution and different gas flow velocities on the thermal response of a catalytic converter during the critical warm-up phase.
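In CFD, a monolith is commonly treated as a porous zone whose resistance follows the Darcy-Forchheimer form, a viscous (Darcy) term plus an inertial (Forchheimer) term. A minimal sketch of the resulting pressure drop per unit length; all coefficient values below (gas properties, permeability, inertial coefficient) are assumptions for illustration, not values from this study:

```python
def pressure_gradient(velocity, mu=3.6e-5, rho=0.7, alpha=2.5e-7, c2=12.0):
    """Darcy-Forchheimer pressure drop per unit length (Pa/m) in a porous zone.

    velocity: superficial gas velocity (m/s)
    mu:    dynamic viscosity of hot exhaust gas (Pa*s)  -- assumed value
    rho:   gas density (kg/m^3)                         -- assumed value
    alpha: monolith permeability (m^2)                  -- assumed value
    c2:    inertial resistance coefficient (1/m)        -- assumed value
    """
    viscous = (mu / alpha) * velocity           # Darcy term, dominant at low velocity
    inertial = c2 * 0.5 * rho * velocity ** 2   # Forchheimer term, dominant at high velocity
    return viscous + inertial

# Effect of gas velocity on the loss through the monolith channels
for v in (2.0, 4.0, 8.0):
    print(v, round(pressure_gradient(v), 1))
```

Because the inertial term is quadratic, doubling the inlet velocity more than doubles the pressure loss, which is one reason flow maldistribution across the monolith face matters during warm-up.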

Keywords: catalytic converter, computational fluid dynamic, porous media, velocity distribution

Procedia PDF Downloads 842
17480 An Analytical Approach of Computational Complexity for the Method of Multifluid Modelling

Authors: A. K. Borah, A. K. Singh

Abstract:

In this paper we deal with the building blocks of the computer simulation of multiphase flows. The whole simulation procedure can be viewed as two super-procedures: the implementation of the volume-of-fluid (VOF) method and the solution of the Navier-Stokes equations. Moreover, a sequential code for a Navier-Stokes solver has been studied.
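The Bi-CGSTAB iteration named in the keywords can be sketched in pure Python for a small dense system. This is the standard unpreconditioned algorithm on an illustrative 2x2 matrix; a production multifluid solver would apply it to the large sparse pressure system with an ILUT preconditioner, as the keywords suggest:

```python
def bicgstab(A, b, tol=1e-10, max_iter=100):
    """Unpreconditioned Bi-CGSTAB for Ax = b, with A given as a list of rows."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = [0.0] * n
    r = [bi - Axi for bi, Axi in zip(b, matvec(A, x))]
    r_hat = r[:]                        # shadow residual, fixed throughout
    rho = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for _ in range(max_iter):
        rho_new = dot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho_new / dot(r_hat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        if dot(s, s) ** 0.5 < tol:      # early convergence on the half step
            return [xi + alpha * pi for xi, pi in zip(x, p)]
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t)   # stabilization step
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        if dot(r, r) ** 0.5 < tol:
            return x
        rho = rho_new
    return x

# Small nonsymmetric test system: x = [0.1, 0.6]
A = [[4.0, 1.0], [2.0, 3.0]]
b = [1.0, 2.0]
print(bicgstab(A, b))
```

The same loop structure underlies library implementations such as `scipy.sparse.linalg.bicgstab`, which additionally accept a preconditioner operator.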

Keywords: bi-conjugate gradient stabilized (Bi-CGSTAB), ILUT function, Krylov subspace, multifluid flows, preconditioner, SIMPLE algorithm

Procedia PDF Downloads 510
17479 Utilizing Fiber-Based Modeling to Explore the Presence of a Soft Storey in Masonry-Infilled Reinforced Concrete Structures

Authors: Akram Khelaifia, Salah Guettala, Nesreddine Djafar Henni, Rachid Chebili

Abstract:

Recent seismic events have underscored the significant influence of masonry infill walls on the resilience of structures. The irregular positioning of these walls exacerbates their adverse effects, resulting in substantial material and human losses. Research and post-earthquake evaluations emphasize the necessity of considering infill walls in both the design and assessment phases. This study delves into the presence of soft storeys in reinforced concrete structures with infill walls. Employing an approximate method relying on pushover analysis results, fiber-section-based macro-modeling is utilized to simulate the behavior of infill walls. The findings shed light on the presence of soft first storeys, revealing a notable 240% enhancement in resistance for weak-column/strong-beam designed frames due to infill walls. Conversely, the effect is more moderate, at 38%, for strong-column/weak-beam designed frames. Interestingly, the uniform distribution of infill walls throughout the structure's height does not influence soft-storey emergence in the same seismic zone, irrespective of column-beam strength. In regions with low seismic intensity, infill walls dissipate energy, resulting in consistent seismic behavior regardless of column configuration. Despite column strength, structures with open ground storeys remain vulnerable to soft first-storey emergence, underscoring the crucial role of infill walls in reinforced concrete structural design.

Keywords: masonry infill walls, soft storey, pushover analysis, fiber section, macro-modeling

Procedia PDF Downloads 44
17478 Scientific Recommender Systems Based on Neural Topic Model

Authors: Smail Boussaadi, Hassina Aliane

Abstract:

With the rapid growth of scientific literature, it is becoming increasingly challenging for researchers to keep up with the latest findings in their fields. Academic and professional networks play an essential role in connecting researchers and disseminating knowledge. To improve the user experience within these networks, we need effective article recommendation systems that provide personalized content. Current recommendation systems often rely on collaborative filtering or content-based techniques. However, these methods have limitations, such as the cold-start problem and difficulty in capturing semantic relationships between articles. To overcome these challenges, we propose a new approach that combines BERTopic, a state-of-the-art topic modeling technique built on BERT (Bidirectional Encoder Representations from Transformers) embeddings, with community detection algorithms in an academic and professional network. Experiments confirm our performance expectations by showing good relevance and objectivity in the results.

Keywords: scientific articles, community detection, academic social network, recommender systems, neural topic model

Procedia PDF Downloads 80
17477 Relationship between Wave Velocities and Geo-Pressures in Shallow Libyan Carbonate Reservoir

Authors: Tarek Sabri Duzan

Abstract:

Knowledge of the magnitude of geo-pressures (pore, fracture, and overburden pressures) is vital, especially during drilling, completions, stimulations, and enhanced oil recovery. Many problems, such as lost circulation, could have been avoided if techniques for calculating geo-pressures had been employed in the well planning, mud weight plan, and casing design. In this paper, we focused on the relationships between geo-pressures and wave velocities (P-wave (Vp) and S-wave (Vs)) in a shallow Libyan carbonate reservoir in the western part of the Sirte Basin (Dahra F-Area). The data used in this report were collected from four recently drilled wells scattered throughout the reservoir of interest, as shown in figure-1. The data used in this work are bulk density, Formation Multi-Tester (FMT) results, and acoustic wave velocities. The Eaton method is the most widely used relation; therefore, this equation has been used to calculate the fracture pressure for all wells, using the dynamic Poisson ratio calculated from the acoustic wave velocities, the FMT results for the pore pressure, and the overburden pressure estimated from the bulk density. Upon data analysis, it was found that there is a linear relationship between geo-pressures (pore, fracture, and overburden pressures) and the wave velocity ratio (Vp/Vs). However, the relationship was not clear in the high-pressure area, as shown in figure-10. Therefore, it is recommended to use the resulting relationship, together with the new seismic data for the shallow carbonate reservoir, to predict the geo-pressures for future oil operations. More data can be collected from the high-pressure zone to investigate this area further.
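The workflow described above, a dynamic Poisson's ratio from Vp and Vs followed by Eaton's fracture-pressure relation Pf = ν/(1 − ν)·(S − Pp) + Pp, can be sketched as follows; the numerical inputs are hypothetical, not values from the Dahra F-Area wells:

```python
def dynamic_poisson(vp, vs):
    # Dynamic Poisson's ratio from compressional (Vp) and shear (Vs) velocities
    return (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))

def eaton_fracture_pressure(overburden, pore, vp, vs):
    # Eaton relation: Pf = nu/(1 - nu) * (S - Pp) + Pp
    nu = dynamic_poisson(vp, vs)
    return nu / (1 - nu) * (overburden - pore) + pore

# Hypothetical inputs: pressures in psi, velocities in m/s (units cancel in Vp/Vs)
S, Pp = 4500.0, 2100.0       # overburden and pore pressure -- assumed values
vp, vs = 4800.0, 2600.0      # acoustic velocities           -- assumed values
nu = dynamic_poisson(vp, vs)
Pf = eaton_fracture_pressure(S, Pp, vp, vs)
print(round(nu, 3), round(Pf, 1))
```

The predicted fracture pressure always falls between the pore and overburden pressures for 0 < ν < 0.5, which is the window mud-weight plans are designed around.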

Keywords: bulk density, Formation Multi-Tester (FMT) results, acoustic wave, shallow carbonate reservoir, d/jfield velocities

Procedia PDF Downloads 274
17476 University-home Partnerships for Enhancing Students’ Career Adapting Responses: A Moderated-mediation Model

Authors: Yin Ma, Xun Wang, Kelsey Austin

Abstract:

Purpose – Building upon career construction theory and conservation of resources theory, we developed a moderated mediation model to examine how perceived university support impacts students' career adapting responses, namely crystallization, exploration, decision, and preparation, via the mediator career adaptability and the moderator perceived parental support. Design/methodology/approach – A multi-stage sampling strategy was employed, and survey data were collected. Structural equation modeling was used to perform the analysis. Findings – Perceived university support directly promotes students' career adaptability and three of the career adapting responses, namely exploration, decision, and preparation. It also impacts all four career adapting responses via the mediation effect of career adaptability. Its impact on students' career adaptability increases greatly when students receive parental career-related support. Research limitations/implications – The cross-sectional design limits causal inference. As the study was conducted in China, the findings should be interpreted cautiously in other countries due to cultural differences. Practical implications – University support is vital to students' career adaptability, and support from parents can enhance this process. University-home collaboration is necessary to promote students' career adapting responses. For students, seeking and utilizing as many supporting resources as possible is vital for their human resource development. On an organizational level, universities could benefit from our findings by introducing practices that ask students to rate career-related courses and encourage them to talk with their parents regularly. Originality/value – Using a recently developed scale, the current work contributes to the literature by investigating the impact of multiple contextual factors on students' career adapting responses. It also provides empirical support for the role of human intervention in fostering career adapting responses.

Keywords: career adaptability, university and parental support, China studies, sociology of education

Procedia PDF Downloads 46
17475 Microwave Dielectric Constant Measurements of Titanium Dioxide Using Five Mixture Equations

Authors: Jyh Sheen, Yong-Lin Wang

Abstract:

This research aims to find a different measurement procedure for the microwave dielectric properties of ceramic materials with high dielectric constants. For a composite of ceramic dispersed in a polymer matrix, the dielectric constants of composites with different concentrations can be obtained from various mixture equations. Another use of the mixture rules is to calculate the permittivity of the ceramic from measurements on the composite. To this end, the analysis method and theoretical accuracy of six basic mixture laws, derived from three basic particle shapes of ceramic fillers, have been reported for ceramic dielectric constants below 40 at microwave frequency. Similar research has been done for other well-known mixture rules. It has shown that both a good physical curve match with experimental results and a low potential theoretical error are important for calculation accuracy. Recently, a modified mixture equation for high-dielectric-constant ceramics at microwave frequency was also presented for strontium titanate (SrTiO3); it was selected from five well-known mixing rules and has shown good accuracy for high-dielectric-constant measurements. However, the accuracy of this modified equation for other high-dielectric-constant materials is still unclear. Therefore, the five well-known mixing rules are selected again to understand their application to other high-dielectric-constant ceramics. Another high-dielectric-constant ceramic, TiO2, with a dielectric constant of 100, was chosen for this research. The theoretical error equations of the rules are derived. In addition to the theoretical work, experimental measurements are always required. Titanium dioxide is an interesting ceramic for microwave applications. In this research, its powder is adopted as the filler material, and polyethylene powder serves as the matrix material.
The dielectric constants of the ceramic-polyethylene composites with various compositions were measured at 10 GHz. The theoretical curves of the five published mixture equations are shown together with the measured results to assess the curve-matching condition of each rule. Finally, based on the experimental observations and theoretical analysis, one of the five rules was selected and modified into a new powder mixture equation. This modified rule shows very good curve matching with the measurement data and low theoretical error. The dielectric constant of the pure filler medium (titanium dioxide) can then be calculated with the mixing equations from the measured dielectric constants of the composites. The accuracy of estimating the dielectric constant of the pure ceramic by the various mixture rules is compared. The modified mixture rule has also shown good measurement accuracy for the dielectric constant of titanium dioxide ceramic. This study can be applied to microwave dielectric property measurements of other high-dielectric-constant ceramic materials in the future.
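As one concrete example of inverting a mixture rule, the classical Lichtenecker logarithmic law, ln ε_c = v_f·ln ε_f + (1 − v_f)·ln ε_m, can be solved for the filler permittivity from a measured composite value. This is one of the classical rules, not the modified equation proposed in the paper, and the numbers below are illustrative (TiO2-like filler, ε ≈ 100, in polyethylene, ε ≈ 2.3):

```python
import math

def lichtenecker_composite(eps_f, eps_m, v_f):
    # Forward Lichtenecker rule: effective permittivity of the composite
    return math.exp(v_f * math.log(eps_f) + (1 - v_f) * math.log(eps_m))

def lichtenecker_filler(eps_c, eps_m, v_f):
    # Inverted rule: recover the filler (ceramic) permittivity from the composite
    return math.exp((math.log(eps_c) - (1 - v_f) * math.log(eps_m)) / v_f)

eps_c = lichtenecker_composite(100.0, 2.3, 0.3)   # composite at 30 vol% filler
print(round(eps_c, 2))
print(round(lichtenecker_filler(eps_c, 2.3, 0.3), 1))   # recovers the filler value
```

In practice the inversion is applied to measured composite permittivities at several filler fractions, and the consistency of the recovered filler value across fractions indicates how well the chosen rule fits the system.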

Keywords: microwave measurement, dielectric constant, mixture rules, composites

Procedia PDF Downloads 350
17474 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of effective surface area, pore size, and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants, and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants.
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K and for nitrogen on mesoporous alumina at 77 K with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes) and thermodynamic (Gibbs free energies) variations of the adsorption sites.
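For contrast with the pressure-varying approach, the constant-parameter Langmuir fit that the paper generalizes can be sketched via the classical linearization P/q = P/q_m + 1/(K·q_m), fitted by ordinary least squares (not the flexible least squares technique of the paper); the isotherm data below are synthetic:

```python
def fit_langmuir(pressures, uptakes):
    """Fit q = q_m*K*P / (1 + K*P) via the linearized form P/q = P/q_m + 1/(K*q_m)."""
    x = pressures
    y = [p / q for p, q in zip(pressures, uptakes)]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope = 1/q_m
    intercept = (sy - slope * sx) / n                  # intercept = 1/(K*q_m)
    q_m = 1.0 / slope
    K = slope / intercept
    return q_m, K

# Synthetic isotherm generated from q_m = 2.0 mmol/g, K = 0.5 bar^-1
P = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]
q = [2.0 * 0.5 * p / (1 + 0.5 * p) for p in P]
print(fit_langmuir(P, q))
```

The pressure-varying extension replaces the single (q_m, K) pair with a smoothly varying sequence of parameter vectors, penalizing both the misfit at each pressure and the change between successive parameter values.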

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 217
17473 Diagnostics and Explanation of the Current Status of the 40-Year Railway Viaduct

Authors: Jakub Zembrzuski, Bartosz Sobczyk, Mikołaj Miśkiewicz

Abstract:

Besides designing new structures, engineers all over the world must face another problem: maintenance, repairs, and assessment of the technical condition of existing bridges. To solve more complex issues, it is necessary to be familiar with the theory of the finite element method and to have access to software that provides tools sufficient to create sometimes significantly advanced numerical models. The paper includes a brief assessment of the technical condition, a description of the in situ non-destructive testing carried out, and the FEM models created for global and local analysis. In situ testing was performed using strain gauges and displacement sensors. Numerical models were created using various software and numerical modeling techniques. Particularly noteworthy is the method of modeling the riveted joints of the crossbeam of the viaduct. It is a simplified method that uses only basic numerical tools such as beam and shell finite elements, constraints, and simplified boundary conditions (fixed support and symmetry). The results of the numerical analyses are presented and discussed. It is clearly explained why the structure did not fail, despite the fact that the weld of the deck plate failed completely. A further research problem that was solved was to determine the cause of the rapid increase in values on the stress diagram in the cross-section of the transverse member. The problems were solved using only the aforementioned simplified method of modeling riveted joints, which demonstrates that such problems can be solved without access to sophisticated software for advanced nonlinear analysis. Moreover, the obtained results are of great importance in the field of assessing the operation of bridge structures with an orthotropic deck plate.

Keywords: bridge, diagnostics, FEM simulations, failure, NDT, in situ testing

Procedia PDF Downloads 58
17472 An Empirical Study of Determinants Influencing Telemedicine Services Acceptance by Healthcare Professionals: Case of Selected Hospitals in Ghana

Authors: Jonathan Kissi, Baozhen Dai, Wisdom W. K. Pomegbe, Abdul-Basit Kassim

Abstract:

Protecting patients' digital information is a growing concern for healthcare institutions as people increasingly live their lives through telemedicine services. Telemedicine services are confronted with several determinants that hinder their successful implementation, especially in developing countries. Identifying the determinants that influence the acceptance of telemedicine services is also a problem for healthcare professionals. Despite the tremendous increase in telemedicine services, their adoption and use have been quite slow in some healthcare settings. It is generally accepted in today's globalizing world that the success of telemedicine services relies on user satisfaction. Satisfying health professionals and patients is one of the crucial objectives of telemedicine success. This study investigates the determinants that influence health professionals' intention to utilize telemedicine services in clinical activities in a sub-Saharan African country in West Africa (Ghana). A hybridized model comprising health adoption models, including the technology acceptance theory, the diffusion of innovation theory, and the protection motivation theory, was used to investigate these questions. The study was carried out in four government health institutions that apply and regulate telemedicine services in their clinical activities. A structured questionnaire was developed and used for data collection. Purposive and convenience sampling methods were used in the selection of healthcare professionals from different medical fields for the study. The collected data were analyzed using a structural equation modeling (SEM) approach.
All selected constructs showed a significant relationship with health professionals' behavioral intention in the direction expected from the prior literature, including perceived usefulness, perceived ease of use, management strategies, financial sustainability, communication channels, patient security threat, patient privacy risk, self-efficacy, actual service use, user satisfaction, and telemedicine system security threat. Surprisingly, user characteristics and the response efficacy of health professionals were not significant in the hybridized model. The findings and insights from this research show that health professionals are pragmatic when making choices about technology applications and about their willingness to use telemedicine services. They are, however, anxious about its threats and coping appraisals. The significant constructs identified in the study may help to increase efficiency, service quality, quality patient care delivery, and user satisfaction among healthcare professionals. The implementation and effective utilization of telemedicine services in the selected hospitals will serve as a strategy to ease hardships in healthcare service delivery. The service will help attain universal health coverage for the whole populace. This study contributes to empirical knowledge by identifying the vital factors influencing health professionals' behavioral intention to adopt telemedicine services. The study will also help healthcare stakeholders formulate better policies on telemedicine service usage.

Keywords: telemedicine service, perceived usefulness, perceived ease of use, management strategies, security threats

Procedia PDF Downloads 126
17471 Determining the Sources of Sediment at Different Areas of the Catchment: A Case Study of Welbedacht Reservoir, South Africa

Authors: D. T. Chabalala, J. M. Ndambuki, M. F. Ilunga

Abstract:

Sedimentation comprises the processes of erosion, transportation, deposition, and compaction of sediment. Sedimentation in a reservoir results in a decrease in water storage capacity, downstream problems involving aggradation and degradation, blockage of the intake, and changes in water quality. A study was conducted in the Caledon River catchment upstream of the Welbedacht Reservoir, located in the south-eastern part of the Free State province, South Africa. The aim of this research was to investigate and develop a model for integrated catchment modelling of sedimentation processes and management for the Welbedacht Reservoir. The Revised Universal Soil Loss Equation (RUSLE) was applied to determine the sources of sediment in different areas of the catchment. The model was also used to determine the impact of changes in management practice on erosion generation. The results revealed that the main sources of sediment in the watershed are cultivated land (273 tons per hectare), built-up land and forest (103.3 tons per hectare), and grassland, degraded land, and mining and quarry areas (3.9, 9.8, and 5.3 tons per hectare, respectively). After application of soil conservation practices to the developed RUSLE model, the results revealed that the total average annual soil loss in the catchment decreased by 76% and the sediment yield from cultivated land decreased by 75%, while that from built-up land and forest decreased by 42% and 99%, respectively. The results of this study will be used by government departments to develop sustainable policies.
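The RUSLE estimate underlying the sediment-source analysis is the factor product A = R·K·LS·C·P; a minimal sketch with hypothetical factor values (not the Caledon catchment's calibrated factors) showing how an improved support practice lowers the predicted loss:

```python
def rusle(R, K, LS, C, P):
    # A = R * K * LS * C * P  (average annual soil loss, e.g. t/ha/yr)
    # R: rainfall erosivity, K: soil erodibility, LS: slope length-steepness,
    # C: cover management, P: support practice (1.0 = none)
    return R * K * LS * C * P

# Hypothetical factors for a cultivated slope
baseline = rusle(R=450.0, K=0.3, LS=2.5, C=0.5, P=1.0)    # no conservation practice
terraced = rusle(R=450.0, K=0.3, LS=2.5, C=0.5, P=0.25)   # assumed terracing P-factor
print(baseline, terraced, round(1 - terraced / baseline, 2))
```

In a spatially distributed application such as this study, each factor is a raster layer and the product is evaluated cell by cell, so changing the C or P layer directly quantifies the effect of a management scenario on catchment soil loss.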

Keywords: Welbedacht reservoir, sedimentation, RUSLE, Caledon River

Procedia PDF Downloads 182