Search results for: digital business models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11431

1681 Assessing Denitrification-Disintegration Model’s Efficacy in Simulating Greenhouse Gas Emissions, Crop Growth, Yield, and Soil Biochemical Processes in Moroccan Context

Authors: Mohamed Boullouz, Mohamed Louay Metougui

Abstract:

Accurate modeling of greenhouse gas (GHG) emissions, crop growth, soil productivity, and biochemical processes is crucial considering escalating global concerns about climate change and the urgent need to improve agricultural sustainability. The application of the denitrification-disintegration (DNDC) model in the context of Morocco's unique agro-climate is thoroughly investigated in this study. Our main research hypothesis is that the DNDC model offers an effective and powerful tool for precisely simulating a wide range of significant parameters, including greenhouse gas emissions, crop growth, yield potential, and complex soil biogeochemical processes, all consistent with the intricate features of Morocco's agricultural environment. To verify this hypothesis, a large body of field data covering Morocco's various agricultural regions and encompassing a range of soil types, climatic factors, and crop varieties had to be gathered. These experimental data sets will serve as the foundation for careful model calibration and subsequent validation, ensuring the accuracy of simulation results. In conclusion, the prospective research findings add to the global conversation on climate-resilient agricultural practices while promoting sustainable agricultural models in Morocco. Recognition of the DNDC model as a potent simulation tool tailored to Moroccan conditions may strengthen the ability of policy architects and agricultural actors to make informed decisions that advance both food security and environmental stability.

Keywords: greenhouse gas emissions, DNDC model, sustainable agriculture, Moroccan cropping systems

Procedia PDF Downloads 58
1680 Synthetic Bis(2-Pyridylmethyl)Amino-Chloroacetyl Chloride- Ethylenediamine-Grafted Graphene Oxide Sheets Combined with Magnetic Nanoparticles: Remove Metal Ions and Catalytic Application

Authors: Laroussi Chaabane, Amel El Ghali, Emmanuel Beyou, Mohamed Hassen V. Baouab

Abstract:

In this research, the functionalization of graphene oxide sheets by ethylenediamine (EDA) was accomplished, followed by the grafting of a bis(2-pyridylmethyl)amino group (BPED) onto the activated graphene oxide sheets in the presence of chloroacetyl chloride (CAC); the product was then combined with magnetic nanoparticles (Fe₃O₄NPs) to produce a magnetic graphene-based composite [(Go-EDA-CAC)@Fe₃O₄NPs-BPED]. The physicochemical properties of the [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] composites were investigated by Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), X-ray diffraction (XRD), and thermogravimetric analysis (TGA). Additionally, the catalysts can be easily recycled within ten seconds by using an external magnetic field. Moreover, [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] was used for removing Cu(II) ions from aqueous solutions in a batch process. The effects of pH, contact time, and temperature on metal ion adsorption were investigated; adsorption was found to be only weakly dependent on ionic strength. The maximum adsorption capacity of Cu(II) on the [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] at pH 6 is 3.46 mmol·g⁻¹. To examine the underlying mechanism of the adsorption process, pseudo-first-order, pseudo-second-order, and intraparticle diffusion models were fitted to the experimental kinetic data. Results showed that the pseudo-second-order equation was appropriate to describe Cu(II) adsorption by [(Go-EDA-CAC)@Fe₃O₄NPs-BPED]. Adsorption data were further analyzed by the Langmuir, Freundlich, and Jossens adsorption approaches. Additionally, the adsorption properties of the [(Go-EDA-CAC)@Fe₃O₄NPs-BPED], its reusability (more than 6 cycles), and its durability in aqueous solutions open the path to the removal of Cu(II) from aqueous solution. Based on the results obtained, we report the activity of Cu(II) supported on [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] as a catalyst for the cross-coupling of symmetric alkynes.
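For illustration only (this is not the authors' code), a minimal Python sketch of fitting the integrated pseudo-second-order kinetic model to uptake data of the kind analyzed above might look as follows; the time and uptake values are hypothetical placeholders.

# Sketch: fit the pseudo-second-order model q(t) = k2*qe^2*t / (1 + k2*qe*t).
# The contact-time/uptake values below are placeholders, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

t = np.array([5, 10, 20, 40, 60, 120, 240], dtype=float)   # contact time, min
q = np.array([1.1, 1.8, 2.4, 2.9, 3.1, 3.3, 3.4])          # Cu(II) uptake, mmol/g

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q, p0=[3.5, 0.01])
print(f"qe = {qe_fit:.2f} mmol/g, k2 = {k2_fit:.4f} g/(mmol*min)")

The same pattern extends to the Langmuir or Freundlich isotherms by swapping in the corresponding model function.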

Keywords: graphene, magnetic nanoparticles, adsorption kinetics/isotherms, cross coupling

Procedia PDF Downloads 131
1679 Religiosity and Involvement in Purchasing Convenience Foods: Using Two-Step Cluster Analysis to Identify Heterogenous Muslim Consumers in the UK

Authors: Aisha Ijaz

Abstract:

The paper focuses on the impact of Muslim religiosity on convenience food purchases and the involvement experienced in a non-Muslim culture. There is a scarcity of research on the purchasing patterns of Muslim diaspora communities residing in risk societies, particularly in contexts where there is an increasing inclination toward industrialized food items alongside a renewed interest in the concept of natural foods. The United Kingdom serves as an appropriate setting for this study due to the increasing Muslim population in the country, paralleled by the expanding Halal Food Market. A multi-dimensional framework is proposed, testing for five forms of involvement, specifically Purchase Decision Involvement, Product Involvement, Behavioural Involvement, Intrinsic Risk and Extrinsic Risk. Quantitative cross-sectional consumer data were collected through a face-to-face survey contact method with 141 Muslims during the summer of 2020 in Liverpool, located in the Northwest of England. A proportion formula was utilised, and the population of interest was stratified by gender and age before recruitment took place through local mosques and community centers. Six input variables were used (intrinsic religiosity and the involvement dimensions), dividing the sample into four clusters using the Two-Step Cluster Analysis procedure in SPSS. Nuanced variances were observed in the type of involvement experienced by religiosity group, which influences behaviour when purchasing convenience food. Four distinct market segments were identified: highly religious ego-involving (39.7%), less religious active (26.2%), highly religious unaware (16.3%), and less religious concerned (17.7%). These segments differ significantly with respect to their involvement, behavioural variables (place of purchase and information sources used), socio-cultural characteristics (acculturation and social class), and individual characteristics. Choosing the appropriate convenience food is centrally related to the value system of highly religious ego-involving first-generation Muslims, which explains their preference for shopping at ethnic food stores. Less religious active consumers are older and highly alert in information processing to make the optimal food choice, relying heavily on product label sources. Highly religious unaware Muslims are less dietarily acculturated to the UK diet and tend to rely on digital and expert advice sources. The less religious concerned segment, typified by younger age and third generation, is engaged with the purchase process because its members are worried about making unsuitable food choices. Research implications are outlined and potential avenues for further exploration are identified.

Keywords: consumer behaviour, consumption, convenience food, religion, muslims, UK

Procedia PDF Downloads 49
1678 Spectral Mixture Model Applied to Cannabis Parcel Determination

Authors: Levent Basayigit, Sinan Demir, Yusuf Ucar, Burhan Kara

Abstract:

Many research projects require accurate delineation of the different land cover types of an agricultural area. This is especially critical for identifying specific plants such as cannabis. However, the complexity of vegetation stand structure, the abundance of vegetation species, and the smooth transition between different secondary succession stages make vegetation classification difficult when using traditional approaches such as the maximum likelihood classifier. Most of the time, classification distinguishes only between trees, annual crops, or grain, and it has been difficult to accurately identify cannabis mixed with other plants. In this paper, a mixed distribution model approach is applied to classify pure and mixed cannabis parcels using Worldview-2 imagery in the Lakes region of Turkey. Five different land use types (including sunflower, maize, bare soil, and cannabis) were identified in the image. A constrained Gaussian mixture discriminant analysis (GMDA) was used to unmix the image. In the study, 255 reflectance ratios derived from spectral signatures of seven bands (Blue-Green-Yellow-Red-Rededge-NIR1-NIR2) were randomly split into 80% training and 20% test data. The Gaussian mixture distribution model approach proved to be an effective and convenient way of using very high spatial resolution imagery to distinguish cannabis vegetation. Based on the overall accuracies of the classification, the Gaussian mixture distribution model was found to be very successful at image classification tasks. The approach is sensitive enough to capture illegal cannabis planting areas in large plains, and it can also be used for monitoring and detecting illegal cannabis planting areas from their spectral reflectance.
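As a hedged sketch of the classification idea described above (one Gaussian mixture fitted per land-use class and samples assigned to the class with the highest likelihood), the following Python example uses scikit-learn with synthetic features standing in for the Worldview-2 band ratios; it is illustrative, not the authors' implementation.

# Sketch: Gaussian mixture discriminant analysis (GMDA) with an 80/20 split.
# Features and labels are synthetic stand-ins for the Worldview-2 band ratios.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 7))        # seven band-derived features per sample
y = rng.integers(0, 5, size=1000)     # five land-use classes (incl. cannabis)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# One mixture per class; classify test samples by maximum log-likelihood.
models = {c: GaussianMixture(n_components=2, random_state=0).fit(X_tr[y_tr == c])
          for c in np.unique(y_tr)}
classes = sorted(models)
log_lik = np.column_stack([models[c].score_samples(X_te) for c in classes])
y_pred = np.array(classes)[np.argmax(log_lik, axis=1)]
print("overall accuracy:", (y_pred == y_te).mean())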

Keywords: Gaussian mixture discriminant analysis, spectral mixture model, Worldview-2, land parcels

Procedia PDF Downloads 191
1677 An Analysis of Socio-Demographics, Living Conditions, and Physical and Emotional Child Abuse Patterns in the Context of the 2010 Haiti Earthquake

Authors: Sony Subedi, Colleen Davison, Susan Bartels

Abstract:

Objective: The aims of this study are to i) investigate the socio-demographics and living conditions of households in Haiti pre- and post-2010 earthquake, ii) determine the household prevalence of emotional and physical abuse in children (aged 2-14) after the earthquake, and iii) explore the association between earthquake-related loss and the experience of emotional and physical child abuse in the household while considering potential confounding variables and the interactive effects of a number of social, economic, and demographic factors. Methods: A nationally representative sample of Haitian households from the 2005/6 and 2012 phases of the Demographic and Health Surveys (DHS) was used. Descriptive analysis was summarized using frequencies and measures of central tendency. Chi-squared and independent t-tests were used to compare data that were available pre- and post-earthquake. The association between experiences of earthquake-related loss and emotional and physical child abuse was assessed using log-binomial regression models. Results: Comparing pre- and post-earthquake data, noteworthy improvements were observed in the educational attainment of the household head (a 9.1% decrease in the "no education" category) and in household possession of electricity, television, mobile phone, and radio post-earthquake. Approximately 77.0% of children aged 2-14 experienced at least one form of physical abuse and 78.5% of children experienced at least one form of emotional abuse in the month prior to the 2012 survey period. Analysis regarding the third objective (the association between experiences of earthquake-related loss and emotional and physical child abuse) is in progress. Conclusions: The extremely high prevalence of emotional and physical child abuse in Haiti indicates an immediate need for improvements in the enforcement of existing policies and interventions aimed at decreasing child abuse in the household.
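For the third objective, a hedged sketch of a log-binomial regression (a GLM with binomial family and log link, whose exponentiated coefficients are prevalence ratios) is shown below using statsmodels; the variable names and data are hypothetical placeholders, not DHS data.

# Sketch: log-binomial model for the loss-abuse association (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
loss = rng.integers(0, 2, n)                      # any earthquake-related loss
age = rng.integers(2, 15, n)
p = 0.4 * np.exp(0.3 * loss)                      # synthetic prevalence, kept < 1
abuse = rng.binomial(1, p)
df = pd.DataFrame({"abuse": abuse, "loss": loss, "age": age})

# Older statsmodels versions spell the link class as links.log().
model = smf.glm("abuse ~ loss + age", data=df,
                family=sm.families.Binomial(link=sm.families.links.Log()))
result = model.fit()
print(np.exp(result.params))                      # prevalence ratios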

Keywords: Haiti earthquake, physical abuse, emotional abuse, natural disasters, children

Procedia PDF Downloads 171
1676 Exploring Individual Decision Making Processes and the Role of Information Structure in Promoting Uptake of Energy Efficient Technologies

Authors: Rebecca J. Hafner, Daniel Read, David Elmes

Abstract:

The current research applies decision-making theory to address the problem of increasing uptake of energy-efficient technologies in the marketplace, where uptake is currently slower than one might predict following rational choice models. Specifically, in two studies we apply the alignable/non-alignable features effect and explore the impact of varying information structure on consumers' preference for standard versus energy-efficient technologies. As researchers in the Interdisciplinary centre for Storage, Transformation and Upgrading of Thermal Energy (i-STUTE) are currently developing energy-efficient heating systems for homes and businesses, we focus on the context of home heating choice and compare preference for a standard condensing boiler versus an energy-efficient heat pump, according to experimental manipulations in the structure of prior information. In Study 1, we find that people prefer stronger alignable features when options are similar; an effect which is mediated by an increased tendency to infer that missing information is the same. Yet, in contrast to previous research, we find no effects of alignability on option preference when options differ. The advanced methodological approach used here, which is the first study of its kind to randomly allocate features as either alignable or non-alignable, highlights potential design effects in previous work. Study 2 is designed to explore the interaction between alignability and construal level as an explanation for the shift in attentional focus when options differ. Theoretical and applied implications for promoting energy-efficient technologies are discussed.

Keywords: energy-efficient technologies, decision-making, alignability effects, construal level theory, CO2 reduction

Procedia PDF Downloads 326
1675 BFDD-S: Big Data Framework to Detect and Mitigate DDoS Attack in SDN Network

Authors: Amirreza Fazely Hamedani, Muzzamil Aziz, Philipp Wieder, Ramin Yahyapour

Abstract:

Software-defined networking has in recent years come to be seen by many network designers as a successor to traditional networking. Unlike traditional networks, where the control and data planes operate together within a single device in the network infrastructure such as switches and routers, the two planes are kept separate in software-defined networks (SDNs). All critical decisions about packet routing are made on the network controller, and the data-plane devices forward packets based on these decisions. This type of network is vulnerable to DDoS attacks, which degrade the overall functioning and performance of the network by continuously injecting fake flows into it. This places a substantial burden on the controller side and ultimately leads to the inaccessibility of the controller and a lack of network service for legitimate users. Thus, the protection of this novel network architecture against denial-of-service attacks is essential. In the world of cybersecurity, attacks and new threats emerge every day. It is essential to have tools capable of managing and analyzing all this new information to detect possible attacks in real time. These tools should provide a comprehensive solution to automatically detect, predict, and prevent abnormalities in the network. Big data encompasses a wide range of studies, but it mainly refers to the massive amounts of structured and unstructured data that organizations deal with on a regular basis. It concerns not only the volume of the data but also how data-driven information can be used to enhance decision-making processes, security, and the overall efficiency of a business. This paper presents an intelligent big data framework as a solution to handle the illegitimate traffic burden placed on the SDN network by numerous DDoS attacks. The framework entails an efficient defence and monitoring mechanism against DDoS attacks, employing state-of-the-art machine learning techniques.
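As a hedged, simplified sketch of how such a framework's detection stage might be prototyped with Spark's ML library (not the authors' framework; the file path and flow-feature column names are hypothetical), consider the following batch pipeline.

# Sketch: Spark ML pipeline classifying flow records as attack vs. benign.
# "flows.csv" and the feature column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("ddos-detection-sketch").getOrCreate()
flows = spark.read.csv("flows.csv", header=True, inferSchema=True)  # labeled flows

features = ["pkt_rate", "byte_rate", "flow_duration", "src_ip_entropy"]
assembler = VectorAssembler(inputCols=features, outputCol="features")
rf = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=50)
pipeline = Pipeline(stages=[assembler, rf])

train, test = flows.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(test))
print("AUC:", auc)

In a deployed setting, the same pipeline could score flow statistics arriving from Kafka via Spark Structured Streaming, which is presumably the role of Kafka in the framework.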

Keywords: apache spark, apache kafka, big data, DDoS attack, machine learning, SDN network

Procedia PDF Downloads 164
1674 Engineering Thermal-Hydraulic Simulator Based on Complex Simulation Suite “Virtual Unit of Nuclear Power Plant”

Authors: Evgeny Obraztsov, Ilya Kremnev, Vitaly Sokolov, Maksim Gavrilov, Evgeny Tretyakov, Vladimir Kukhtevich, Vladimir Bezlepkin

Abstract:

Over the last decade, a specific set of connected software tools and calculation codes has been gradually developed. It allows simulation of I&C systems and of thermal-hydraulic, neutron-physical, and electrical processes in the elements and systems of an NPP unit (initially with WWER (pressurized water reactor)). In 2012 it was named the complex simulation suite “Virtual Unit of NPP” (or CSS “VEB” for short). Proper application of this complex tool results in a coupled mathematical computational model; for a specific NPP design, it is called the Virtual Power Unit (or VPU for short). The VPU can be used for comprehensive modelling of power unit operation, checking operators' functions in a virtual main control room, and modelling complicated scenarios for normal modes and accidents. In addition, CSS “VEB” contains a combination of thermal-hydraulic codes: the best-estimate (two-liquid) calculation codes KORSAR and CORTES and a homogeneous calculation code, TPP. Thus, to analyze a specific technological system, one can build thermal-hydraulic simulation models with different levels of detail, up to a nodalization scheme with real geometry. In some respects, the result is similar to the notion of an “engineering/testing simulator” described by the European Utility Requirements (EUR) for LWR nuclear power plants. The paper is dedicated to a description of the tools mentioned above and an example of the application of the engineering thermal-hydraulic simulator in the analysis of boric acid concentration in the primary coolant (changed by the make-up and boron control system).

Keywords: best-estimate code, complex simulation suite, engineering simulator, power plant, thermal hydraulic, VEB, virtual power unit

Procedia PDF Downloads 373
1673 Flow Field Analysis of Different Intake Bump (Compression Surface) Configurations on a Supersonic Aircraft

Authors: Mudassir Ghafoor, Irsalan Arif, Shuaib Salamat

Abstract:

This paper presents the modeling and analysis of different intake bump (compression surface) configurations and a comparison with an existing supersonic aircraft having a bump intake configuration. Many successful aircraft have shown that a diverterless supersonic inlet (DSI), as compared to a conventional intake, can reduce weight, complexity, and maintenance cost. The research is divided into two parts. In the first part, four different intake bumps are modeled for comparative analysis, keeping the outer perimeter dimensions of the fighter aircraft consistent, and various characteristics such as flow behavior, boundary layer diversion, and pressure recovery are analyzed. In the second part, the modeled bumps are integrated with the intake duct for performance analysis, and a comparison with existing supersonic aircraft data is carried out. The bumps are named uniform large (Config 1), uniform small (Config 2), uniform sharp (Config 3), and non-uniform (Config 4) based on their geometric features. Analysis is carried out at different Mach numbers to analyze flow behavior in the subsonic and supersonic regimes. Flow behavior, boundary layer diversion, and pressure recovery are examined for each bump configuration, and a comparative study is carried out. The analysis reveals that at subsonic speed, Config 1 and Config 2 give pressure recoveries similar to the diverterless supersonic intake, but the difference in pressure recoveries becomes significant at supersonic speed. It was concluded that Config 1 gives better results than Config 3, and that a higher bump amplitude (Config 1) is preferred over a lower one (Configs 2 and 4). It was also observed that the maximum height of the bump is best placed near the cowl lip of the intake duct.

Keywords: bump intake, boundary layer, computational fluid dynamics, diverter-less supersonic inlet

Procedia PDF Downloads 239
1672 Climbing up to Safety and Security: The Facilitation of an NGO Awareness Culture

Authors: Mirad Böhm, Diede De Kok

Abstract:

It goes without saying that for many NGOs a high level of safety and security are crucial issues, which often necessitates the support of military personnel to varying degrees. The relationship between military and NGO personnel is usually a difficult one and while there has been progress, clashes naturally still occur owing to different interpretations of mission objectives amongst many other challenges. NGOs tend to view safety and security as necessary steps towards their goal instead of fundamental pillars of their core ‘business’. The military perspective, however, considers them primary objectives; thus, frequently creating a different vision of how joint operations should be conducted. This paper will argue that internalizing safety and security into the NGO organizational culture is compelling in order to ensure a more effective cooperation with military partners and, ultimately, to achieve their goals. This can be accomplished through a change in perception of safety and security concepts as a fixed and major point on the everyday agenda. Nowadays, there are several training programmes on offer addressing such issues but they primarily focus on the individual level. True internalization of these concepts should reach further by encompassing a wide range of NGO activities, beginning with daily proceedings in office facilities far from conflict zones including logistical and administrative tasks such as budgeting, and leading all the way to actual and potentially hazardous missions in the field. In order to effectuate this change, a tool is required to help NGOs realize, firstly, how they perceive and define safety and security, and secondly, how they can adjust this perception to their benefit. The ‘safety culture ladder’ is a concept that suggests what organizations can and should do to advance their safety. While usually applied to private industrial scenarios, this work will present the concept as a useful instrument to visualize and facilitate the internalization process NGOs ought to go through. The ‘ladder’ allows them to become more aware of the level of their safety and security measures, and moreover, cautions them to take these measures proactively rather than reactively. This in turn will contribute to a rapprochement between military and NGO priority setting in regard to what constitutes a safe working environment.

Keywords: NGO-military cooperation, organisational culture, safety and security awareness, safety culture ladder

Procedia PDF Downloads 323
1671 Study of a Lean Premixed Combustor: A Thermo Acoustic Analysis

Authors: Minoo Ghasemzadeh, Rouzbeh Riazi, Shidvash Vakilipour, Alireza Ramezani

Abstract:

In this study, the thermo-acoustic oscillations of a lean premixed combustor have been investigated, and a one-dimensional code was developed for this purpose. The linearized equations of motion are solved for perturbations with time dependence e^(iωt). Two flame models were considered in this paper, and the effects of mean flow and boundary conditions were also investigated. After manipulation of the flame heat release equation together with the equations of flow perturbation within the main components of the combustor model (i.e., plenum, premixed duct, and combustion chamber), and by considering proper boundary conditions between the components of the model, a system of eight homogeneous equations can be obtained. This simplification, for the main components of the combustor model, is convenient since low-frequency acoustic waves are not affected by bends. Moreover, some elements in the combustor are smaller than the wavelength of the propagated acoustic perturbations. A convection time is also assumed to characterize the time required for the acoustic velocity fluctuations to travel from the point of injection to the location of the flame front in the combustion chamber. The influence of an extended flame model on the acoustic frequencies of the combustor was also investigated, assuming the effect of flame speed, as a function of the equivalence ratio perturbation, on the rate of flame heat release. The abovementioned system of equations has a related eigenvalue equation with complex roots. The sign of the imaginary part of these roots determines whether the disturbances grow or decay, and the real part of these roots gives the frequencies of the modes. The results show a reasonable agreement between the predicted values of dominant frequencies in the present model and those calculated in previous related studies.
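As a hedged numerical illustration of the final step (finding the complex roots of the eigenvalue equation), the sketch below solves a toy n-tau-style dispersion relation with SciPy; the relation and its parameters are placeholders, not the eight-equation combustor system.

# Sketch: find a complex eigenfrequency omega with D(omega) = 0 (toy relation).
import numpy as np
from scipy.optimize import fsolve

n, tau = 0.2, 0.5                    # illustrative interaction index and time delay

def D(w):
    return np.cos(w) + n * np.exp(-1j * w * tau)

def residual(x):
    val = D(x[0] + 1j * x[1])
    return [val.real, val.imag]

w_re, w_im = fsolve(residual, x0=[1.8, 0.0])
print(f"omega = {w_re:.4f} {w_im:+.4f}i")
# With e^(i*omega*t) time dependence, Im(omega) < 0 means the disturbance grows.
print("growing (unstable)" if w_im < 0 else "decaying (stable)")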

Keywords: combustion instability, dominant frequencies, flame speed, premixed combustor

Procedia PDF Downloads 373
1670 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulation of the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical comparisons of their performance. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity of their pairwise likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods. The different clustering algorithm and error model of ETLM seem to lead to a more rigorous selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was a consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimations when compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity. Heterogeneity here means different capture rates between individuals. In these examples, the tolerance for homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimations, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use since it performs better in a wide range of situations.

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 136
1669 The Relationship Between Cyberbullying Victimization, Parent and Peer Attachment and Unconditional Self-Acceptance

Authors: Florina Magdalena Anichitoae, Anca Dobrean, Ionut Stelian Florean

Abstract:

Because cyberbullying victimization is an increasing problem nowadays, affecting more and more children and adolescents around the world, we wanted to take a step forward in analyzing this phenomenon. We therefore examined some variables that have not been studied together before, trying to develop another way of viewing cyberbullying victimization. We tested the effects of mother, father, and peer attachment on adolescent involvement in cyberbullying as victims through unconditional self-acceptance. Furthermore, we analyzed each subscale of the IPPA-R, the instrument we used to measure parent and peer attachment, in relation to cyberbullying victimization through unconditional self-acceptance. We also analyzed whether gender and age could be considered moderators in this model. The analysis was performed on 653 adolescents aged 11-17 years old from Romania. We used structural equation modeling, working in R. For the reliability analysis of the IPPA-R subscales, the USAQ, and the Cyberbullying Test, we calculated internal consistency indices, which varied between .68 and .91. We created two models: the first model including peer alienation, peer trust, peer communication, self-acceptance, and cyberbullying victimization, with CFI=0.97, RMSEA=0.02, 90%CI [0.02, 0.03] and SRMR=0.07, and the second model including parental alienation, parental trust, parental communication, self-acceptance, and cyberbullying victimization, with CFI=0.97, RMSEA=0.02, 90%CI [0.02, 0.03] and SRMR=0.07. Our results were interesting: on one hand, cyberbullying victimization is predicted by peer alienation and peer communication through unconditional self-acceptance. Peer trust directly, significantly, and negatively predicted involvement in cyberbullying. In this regard, considering gender and age as moderators, we found that the relationship between unconditional self-acceptance and cyberbullying victimization is stronger in girls, but age does not moderate the relationship between unconditional self-acceptance and cyberbullying victimization. On the other hand, the hypothesis that the degree of cyberbullying victimization is predicted through unconditional self-acceptance by parental alienation, parental communication, and parental trust was not supported. Still, we could identify a direct path positively predicting victimization through parental alienation and negatively through parental trust. There are also some limitations to this study, which we discuss at the end.

Keywords: adolescent, attachment, cyberbullying victimization, parents, peers, unconditional self-acceptance

Procedia PDF Downloads 202
1668 Exploring Gender-Base Salary Disparities and Equities Among University Presidents

Authors: Daniel Barkley, Jianyi Zhu

Abstract:

This study investigates base salary differentials and gender equity among university presidents across 427 U.S. colleges and universities. While endowments typically do not directly determine university presidents' base salaries, our analysis reveals a noteworthy pattern: endowments explain more than half of the variance in female university presidents' base salaries, compared to a mere 0.69 percent for males. Moreover, female presidents' base salaries tend to rise much faster than male base salaries with increasing university endowments. This disparate impact of endowments on base salaries implies an endowment threshold for achieving gender pay equity. We develop an analytical model predicting an endowment threshold for achieving gender equality and empirically estimate this equity threshold using data from over 427 institutions. Surprisingly, the fields of science and athletics have emerged as sources of gender-neutral base pay. Both male and female university presidents with STEM backgrounds command higher base salaries than those without such qualifications. Additionally, presidents of universities affiliated with Power 5 conferences consistently receive higher base salaries regardless of gender. Consistent with the theory of human capital accumulation, the duration of the university presidency incrementally raises base salaries for both genders but at a diminishing rate. Curiously, prior administrative leadership experience as a vice president, provost, dean, or department chair does not significantly influence base salaries for either gender. By providing empirical evidence and analytical models predicting an endowment threshold for achieving gender equality in base salaries, the study offers valuable insights for policymakers, university administrators, and other stakeholders. These findings hold crucial policy implications, informing strategies to promote gender equality in executive compensation within higher education institutions.

Keywords: higher education, endowments, base salaries, university presidents

Procedia PDF Downloads 45
1667 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

An intelligent transportation system is essential for building smarter cities. Machine learning-based transportation prediction could be a highly promising approach, as it makes invisible aspects of the system visible. In this context, this research aims to build a prototype model that predicts the transportation network by using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that existing headway models cannot respond to dynamic transportation conditions; thus, bus delays often occur. To overcome this problem, a prediction model is presented to find patterns of bus delay by using machine learning on the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built from real-time bus data. The data are gathered through public data portals and a real-time Application Program Interface (API) provided by the government. These data are fundamental resources for organizing interval pattern models of bus operations together with traffic environment factors (road speeds, station conditions, weather, and real-time bus operating information). The prototype model is designed with a machine learning tool (RapidMiner Studio), and tests for bus delay prediction were conducted. This research presents experiments to increase prediction accuracy for bus headway by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Therefore, based on this analysis method, this research represents an effective use of machine learning and urban big data to understand urban dynamics.
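Because the prototype was built in RapidMiner Studio, no code accompanies the abstract; a rough, hypothetical Python equivalent of such a delay-prediction model (with made-up feature names for road speed, rainfall, and bus status) could look like this.

# Rough stand-in for the RapidMiner prototype: predict bus delay in minutes.
# All column names and data are synthetic placeholders, not the study's API feeds.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "road_speed_kmh": rng.uniform(10, 60, n),
    "rain_mm":        rng.exponential(1.0, n),
    "passengers":     rng.integers(0, 60, n),
    "hour_of_day":    rng.integers(5, 23, n),
})
# Synthetic target: delays grow with rain, passenger load, and congestion.
df["delay_min"] = (15 - 0.2 * df["road_speed_kmh"] + 1.5 * df["rain_mm"]
                   + 0.05 * df["passengers"] + rng.normal(0, 1, n))

X = df.drop(columns="delay_min")
X_tr, X_te, y_tr, y_te = train_test_split(X, df["delay_min"], test_size=0.2,
                                          random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("MAE (minutes):", mean_absolute_error(y_te, model.predict(X_te)))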

Keywords: big data, machine learning, smart city, social cost, transportation network

Procedia PDF Downloads 250
1666 Vehicle Activity Characterization Approach to Quantify On-Road Mobile Source Emissions

Authors: Hatem Abou-Senna, Essam Radwan

Abstract:

Transportation agencies and researchers have in the past estimated emissions using one average speed and volume on a long stretch of roadway. Other methods provided better accuracy by utilizing annual average estimates. Travel demand models provided an intermediate level of detail through average daily volumes. Currently, higher accuracy can be achieved through microscopic analyses by splitting the network links into sub-links and utilizing second-by-second trajectories to calculate emissions. Accurately quantifying transportation-related emissions from vehicles is essential. This paper presents an examination of four different approaches to capture the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited access highway in Orlando, Florida. First, at the most basic level, emissions were estimated for the entire 10-mile section 'by hand' using one average traffic volume and average speed. Then, three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NOx, PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach.
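To make the operating-mode idea concrete, the sketch below computes a commonly cited light-duty approximation of vehicle specific power (VSP) from a second-by-second speed trace and bins it into coarse illustrative modes; the coefficients and cut-points are generic textbook values, not necessarily those used in MOVES, and the speed trace is synthetic.

# Sketch: second-by-second VSP and coarse operating-mode binning (illustrative).
import numpy as np

speed_kmh = np.array([0, 5, 15, 30, 45, 50, 48, 30, 10, 0], dtype=float)
v = speed_kmh / 3.6                  # m/s, 1 Hz trace
a = np.gradient(v)                   # m/s^2
grade = 0.0

# Widely used light-duty approximation (kW/tonne); not the exact MOVES definition.
vsp = v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v**3

def mode(vsp_i, v_i, a_i):
    # Illustrative bins only: idle / braking / low- and high-VSP cruise or acceleration.
    if v_i < 0.5:
        return "idle"
    if a_i < -0.5:
        return "braking"
    return "low VSP" if vsp_i < 6 else "high VSP"

for t, (p, vi, ai) in enumerate(zip(vsp, v, a)):
    print(t, f"{p:6.2f} kW/t", mode(p, vi, ai))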

Keywords: limited access highways, MOVES, operating mode distribution (OPMODE), transportation emissions, vehicle specific power (VSP)

Procedia PDF Downloads 336
1665 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator

Authors: Jaeyoung Lee

Abstract:

Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to consistently provide high perception performance in driving environments that vary with time and season. Image segmentation methods using deep learning, which has recently evolved rapidly, provide stably high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrades their performance in embedded processor environments equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSPs), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time in order to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA. By increasing the number of parallel branches, the lack of information caused by fixing the number of channels is resolved. Second, an efficient convolution is selected depending on the size of the activation. Since the MMA size is fixed, normal convolution may be more efficient than depthwise separable convolution, depending on memory access overhead. Thus, the convolution type is decided according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using an extended atrous spatial pyramid pooling (ASPP). The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high-quality contexts using the restored shape without global average pooling paths, since the layer uses the MMA as a simple adder. To verify the proposed method, an experiment was conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), which is the highest recognition rate among embedded networks on the Cityscapes validation set.
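As a hedged sketch of the kind of fixed-channel, parallel-branch atrous block the abstract describes (illustrative only, not the MMANet implementation), a small PyTorch module might look like this.

# Sketch: fixed-channel parallel atrous block in the spirit of the extended ASPP.
import torch
import torch.nn as nn

class ParallelAtrousBlock(nn.Module):
    def __init__(self, channels: int = 64, rates=(1, 6, 12, 18)):
        super().__init__()
        # Every branch keeps the same channel count, so the accelerator's
        # matrix-multiply tile size stays fixed.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(channels * len(rates), channels, 1, bias=False)

    def forward(self, x):
        # Concatenating parallel branches restores context lost by fixing channels.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 60, 80)            # e.g. a downsampled 480 x 640 activation
print(ParallelAtrousBlock()(x).shape)     # torch.Size([1, 64, 60, 80])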

Keywords: edge network, embedded network, MMA, matrix multiplication accelerator, semantic segmentation network

Procedia PDF Downloads 121
1664 The Role of Middle Managers SBU's in Context of Change: Sense-Making Approach

Authors: Hala Alioua, Alberic Tellier

Abstract:

This paper is designed to spotlight research on corporate strategic planning by emphasizing the role of middle managers of SBUs and related issues such as the context of vision change. Previous research on strategic vision has focused principally on SMEs, with relatively limited consideration given to the role of SBU middle managers in the context of change. This research project was carried out using a single case study, built through 2.5 years of immersion in the field, a qualitative method, and an abductive approach. The entity we analyze is a subsidiary of a multinational company headquartered in Germany, specialized in manufacturing automotive equipment. The "Delta Company" is a French manufacturing plant that has undergone numerous changes over the past three years. The two major strategic changes that have had a significant impact on the Delta plant are the strengthening of its core business through a "lead plant strategy" in 2011 and the implementation of a new strategic vision in 2014. These consecutive changes affect the purpose and mission of the middle managers. The plant managers ask the following questions: How do middle managers make sense of the corporate strategic planning imposed by the parent company? How do they appropriate the new vision and translate it into actions on the ground? We chose the individual interview technique with open-ended questions as the source of data collection. We first carried out an exploratory approach by interviewing 8 members of the management committee and 19 heads of services. The first findings and results show that there is a divergence of opinions and interpretations of the corporate strategic planning among organization members, and that there are difficulties in making sense of and interpreting the signals of the environment. The lead plant strategy enables new projects which secure the workload of the Delta Company. Nevertheless, it creates tension and stress among the middle managers because it provokes a lack of resources to the detriment of their main jobs in the manufacturing plant. The middle managers do not have a clear vision, and they wonder whether the new strategic vision means more autonomy and less support from the group.

Keywords: change, middle managers, vision, sensemaking

Procedia PDF Downloads 395
1663 Moving beyond the Social Model of Disability by Engaging in Anti-Oppressive Social Work Practice

Authors: Irene Carter, Roy Hanes, Judy MacDonald

Abstract:

Considering that disability is universal and people with disabilities are part of all societies; that there is a connection between the disabled individual and society; and that it is society and social arrangements that disable people with impairments, contemporary disability discourse emphasizes the social model of disability to counter medical and rehabilitative models of disability. However, the social model does not go far enough in addressing the issues of oppression and inclusion. The authors indicate that the social model does not specifically or adequately address the oppression of persons with disabilities, which is a central component of progressive social work practice with people with disabilities. The social model of disability does not go far enough in deconstructing disability and offering social workers, as well as people with disabilities, a way of moving forward in terms of practice anchored in individual, familial and societal change. The social model of disability is expanded by incorporating principles of anti-oppressive social work practice. Although the contextual analysis of the social model of disability is an important component, there remains a need for social workers to provide service to individuals and their families, which will be illustrated through anti-oppressive practice (AOP). By applying an anti-oppressive model of practice to the above definitions, the authors not only deconstruct disability paradigms but illustrate how AOP offers a framework for social workers to engage with people with disabilities at the individual, familial and community levels of practice, promoting an emancipatory focus in working with people with disabilities. An anti-social-oppression social work model of disability connects the day-to-day hardships of people with disabilities to the direct consequences of oppression in the form of ableism. AOP theory finds many of its basic concepts within social-oppression theory and the social model of disability. It is often the case that practitioners, including social workers and psychologists, define people with disabilities as having or being a problem, with the focus placed upon adjustment and coping. A case example will be used to illustrate how an AOP paradigm offers social work a more comprehensive and critical analysis and practice model for social work practice with and for people with disabilities than the traditional medical, rehabilitative and social model approaches.

Keywords: anti-oppressive practice, disability, people with disabilities, social model of disability

Procedia PDF Downloads 1062
1662 Investigation of the Material Behaviour of Polymeric Interlayers in Broken Laminated Glass

Authors: Martin Botz, Michael Kraus, Geralt Siebert

Abstract:

The use of laminated glass is gaining increasing importance in structural engineering. For safety reasons, at least two glass panes are laminated together with a polymeric interlayer. In case of breakage of one or all of the glass panes, the glass fragments are still connected to the interlayer due to adhesion forces, and a certain residual load-bearing capacity is left in the system. Polymer interlayers used in laminated glass show viscoelastic material behavior, i.e. stresses and strains in the interlayer depend on load duration and temperature. In the intact stage, only small strains appear in the interlayer, so the material can be described in a linear way. In the broken stage, large strains can appear and a non-linear viscoelastic material theory is necessary. Relaxation tests on two different types of polymeric interlayers are performed at different temperatures and strain amplitudes to determine the boundary of the non-linear material regime. Based on the small-scale specimen results, further tests on broken laminated glass panes are conducted. So-called 'through-crack-bending' (TCB) tests are performed, in which the laminated glass has a defined crack pattern. The test set-up is realized in such a way that one glass layer is still able to transfer compressive stresses, but tensile stresses have to be transferred by the interlayer alone. The TCB tests are also conducted under different temperatures but constant force (creep tests). The aim of these experiments is to determine whether the results of small-scale tests on the interlayer are transferable to a laminated glass system in the broken stage. In this study, limits of the applicability of linear viscoelasticity are established in the context of two commercially available polymer interlayers. Furthermore, it is shown that the results of the small-scale tests agree to a certain degree with the results of the TCB large-scale experiments. In a future step, the results can be used to develop material models for the post-breakage performance of laminated glass.
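For reference, relaxation behavior measured in such tests is commonly represented by a generalized Maxwell (Prony series) model; the sketch below fits a two-term series to hypothetical relaxation-modulus data (the values are placeholders, not the tested interlayers).

# Sketch: fit G(t) = G_inf + G1*exp(-t/tau1) + G2*exp(-t/tau2) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def prony(t, G_inf, G1, tau1, G2, tau2):
    return G_inf + G1 * np.exp(-t / tau1) + G2 * np.exp(-t / tau2)

t = np.array([0.1, 1, 10, 100, 1000, 10000], dtype=float)   # time, s
G = np.array([2.8, 2.1, 1.3, 0.7, 0.45, 0.40])              # relaxation modulus, MPa

popt, _ = curve_fit(prony, t, G, p0=[0.4, 1.5, 1.0, 1.0, 100.0], maxfev=10000)
print("G_inf, G1, tau1, G2, tau2 =", np.round(popt, 3))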

Keywords: glass breakage, laminated glass, relaxation test, viscoelasticity

Procedia PDF Downloads 118
1661 Natural Bio-Active Product from Marine Resources

Authors: S. Ahmed John

Abstract:

Marine life forms – bacteria, actinobacteria, cyanobacteria, fungi, microalgae, seaweeds, mangroves and other halophytes – are extremely important oceanic resources, constituting over 90% of the oceanic biomass. Marine natural products have led to the discovery of many compounds considered worthy of clinical application. Marine sources have a high probability of yielding natural products. Natural derivatives play an important role in preventing cancer incidence and serve as a basis for synthetic drug development; 28.12% of anticancer compounds are extracted from mangroves. Exchocaria agollocha contains anticancer compounds. The present investigation reveals the potential of Exchocaria agollocha for biotechnological applications in anticancer and antimicrobial drug discovery, environmental remediation, and the development of new resources for industrial processes. The anticancer activity of Exchocaria agollocha was screened at concentrations from 3.906 to 1000 µg/ml, with dilutions from 1:1 to 1:128, for methanol and chloroform extracts. Cell viability with Exchocaria agollocha was highest at the lower concentrations and low at the higher concentrations of the methanol and chloroform extracts when compared to the control. At a concentration of 3.906 µg/ml (1:128 dilution), cell viability values of 85.32 and 81.96 were found for the methanol and chloroform extracts, respectively. At a concentration of 31.25 µg/ml (1:16 dilution), cell viability was 65.55 for the methanol and 45.55 for the chloroform extract. However, at the highest concentration, cell viabilities of 22.35 and 8.12 were recorded for the methanol and chloroform extracts. Cell viability was higher with methanol than with chloroform extracts at the lower concentrations. The present findings reflect current trends in the screening and activity analysis of metabolites from mangrove resources and expose models that may bring new support for tackling cancer. Bioactive compounds of Exchocaria agollocha have extensive use in the treatment of many diseases and serve as compounds and templates for synthetic modification.

Keywords: bio-active product, compounds, natural products and microalgae

Procedia PDF Downloads 241
1660 Climate Variability and Its Impacts on Rice (Oryza sativa) Productivity in Dass Local Government Area of Bauchi State, Nigeria

Authors: Auwal Garba, Rabiu Maijama’a, Abdullahi Muhammad Jalam

Abstract:

Variability in climate has affected agricultural production all over the globe. This concern has motivated important changes in the field of research during the last decade. Climate variability is believed to have a declining effect on rice production in Nigeria. This study examined climate variability and its impact on rice productivity in Dass Local Government Area, Bauchi State, by employing a linear trend model (LTM), analysis of variance (ANOVA), and regression analysis. Annual seasonal data for the climatic variables temperature (minimum and maximum), rainfall, and solar radiation from 1990 to 2015 were used. Results confirmed that 74.4% of the total variation in rice yield in the study area was explained by the changes in the independent variables; that is, minimum and maximum temperature, rainfall, and solar radiation together explained 74.4% of rice yield in the study area. A rising mean maximum temperature would lead to a reduction in rice production, while a moderate increase in mean minimum temperature would be advantageous for rice production, and a persistent rise in mean maximum temperature will, in the long run, affect rice production more negatively in the future. It is, therefore, important to promote agro-meteorological advisory services, which will be useful for farm planning and yield sustainability. Closer collaboration between meteorologists and agricultural scientists is needed to increase awareness of the existing databases and crop weather models, among others, with a view to reaping the full benefits of research on specific problems and sustainable yield management. There should also be a special initiative by the ADPs (State Agricultural Development Programmes) to promote best agricultural practices that are resilient to climate variability in rice production and support yield sustainability.
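A hedged sketch of the regression reported above (rice yield on minimum/maximum temperature, rainfall, and solar radiation), using statsmodels with placeholder data rather than the Dass series, is shown below.

# Sketch: regress rice yield on climate variables (synthetic placeholder data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 26                                           # annual observations, 1990-2015
df = pd.DataFrame({
    "tmin":  rng.normal(18, 1.5, n),             # deg C
    "tmax":  rng.normal(34, 1.5, n),             # deg C
    "rain":  rng.normal(900, 120, n),            # mm per season
    "solar": rng.normal(20, 2, n),               # MJ m-2 day-1
})
df["yield_t_ha"] = (4 + 0.10 * df["tmin"] - 0.12 * df["tmax"]
                    + 0.002 * df["rain"] + 0.05 * df["solar"]
                    + rng.normal(0, 0.3, n))

model = smf.ols("yield_t_ha ~ tmin + tmax + rain + solar", data=df).fit()
print(model.rsquared)     # share of yield variation explained (cf. the 74.4% reported)
print(model.params)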

Keywords: climate variability, impact, productivity, rice

Procedia PDF Downloads 96
1659 Translation of the Bible into the Yoruba Language: A Functionalist Approach in Resolving Cultural Problems

Authors: Ifeoluwa Omotehinse Oloruntoba

Abstract:

Through comparative and causal models of translation, this paper examined the translation of ‘bread’ into the Yoruba language in three Yoruba versions of the Bible: Bibeli Yoruba Atoka (YBA), Bibeli Mimo ni Ede Yoruba Oni (BMY) and Bibeli Mimo (BM). In biblical times, bread was a very important delicacy that it was synonymous with food in general and in the Bible, bread sometimes refers to a type of food (a mixture of flour, water, and yeast that is baked) or food in general. However, this is not the case in the Yoruba culture. In fact, some decades ago, bread was not known in Nigeria and had no name in the Yoruba language until the 1900s when it was codified as burẹdi in Yoruba, a term borrowed from English and transliterated. Nevertheless, in Nigeria presently, bread is not a special food and it is not appreciated or consumed like in the West. This makes it difficult to translate bread in the Bible into Yoruba. From an investigation on the translation of this term, it was discovered that bread which has 330 occurrences in the English Bible translation (King James) has few occurrences in the three Yoruba Bible versions. In the first version (YBA) published in the 1880s, where bread is synonymous with food in general, it is mostly translated as oúnjẹ (food) or the verb jẹ (to eat), revealing that something is eaten but not indicating what it is. However, when the bread is a type of food, it is rendered as akara, a special delicacy of the Yoruba people made from beans flour. In the later version (BMY) published in the 1990s, bread as food, in general, is also mainly translated as oúnjẹ or the verb jẹ, but when it is a type of food, it is translated as akara with few occurrences of burẹdi. In the latest edition (BM), bread as food is either rendered as ounje or literally translated as burẹdi. Where it is a type of food in this version, it is mainly rendered as burẹdi with few occurrences of akara, indicating the assimilation of bread into the Yoruba culture. This result, although limited, shows that the Bible was translated into Yoruba to make it accessible to Yoruba speakers in their everyday language, hence the application of both domesticating and foreignising strategies. This research also emphasizes the role of the translator as an intermediary between two cultures.

Keywords: translation, Bible, Yoruba, cultural problems

Procedia PDF Downloads 266
1658 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis

Authors: Mohamed Ali Abdennadher

Abstract:

Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-Ray Computed Tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo processing through noise reduction, thresholding, and watershed segmentation techniques to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of particle size distribution. The processed particle data then serves as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges in parallelizing FEA to high core counts, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to extend to very high core counts. The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments.
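A hedged sketch of the particle-separation step described above (thresholding followed by a distance-transform watershed), using scikit-image on a small synthetic 2D image standing in for a CT slice, is given below.

# Sketch: separate touching grains with a distance-transform watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.draw import disk

image = np.zeros((120, 120))
for center in [(40, 40), (60, 55), (85, 90)]:        # three overlapping "grains"
    rr, cc = disk(center, 22)
    image[rr, cc] = 1.0

binary = image > threshold_otsu(image)
distance = ndi.distance_transform_edt(binary)
peaks = peak_local_max(distance, labels=binary, min_distance=10)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-distance, markers, mask=binary)
print("separated particles:", labels.max())

In the study itself this step runs on 3D CT volumes; the distance-transform and watershed calls accept 3D arrays as well.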

Keywords: X-Ray computed tomography, finite element analysis, soil compression behavior, particle morphology

Procedia PDF Downloads 11
1657 The Effect of Vertical Integration on Operational Performance: Evaluating Physician Employment in Hospitals

Authors: Gary Young, David Zepeda, Gilbert Nyaga

Abstract:

This study investigated whether vertical integration of hospitals and physicians is associated with better care for patients with cardiac conditions. A dramatic change in the U.S. hospital industry is the integration of hospitals and physicians through hospital acquisition of physician practices. Yet there is little evidence on whether this form of vertical integration leads to better operational performance of hospitals. The study was conducted as an observational investigation based on a pooled, cross-sectional database. The study sample comprised hospitals in the State of California. The time frame for the study was 2010 to 2012. The key performance measure was hospitals’ degree of compliance with performance criteria set out by the federal government for managing patients with cardiac conditions. These criteria relate to the types of clinical tests and medications that hospitals should provide to cardiac patients, and compliance requires the cooperation of a hospital’s physicians. Data for this measure were obtained from a federal website that presents performance scores for U.S. hospitals. The key independent variable was the percentage of cardiologists that a hospital employs (versus cardiologists who are affiliated with, but not employed by, the hospital). Data for this measure were obtained from the State of California, which requires hospitals to report financial and operational data each year, including numbers of employed physicians. Other characteristics of hospitals (e.g., information technology for cardiac care, volume of cardiac patients) were also evaluated as possible complements or substitutes for physician employment by hospitals. Additional sources of data included the American Hospital Association and the U.S. Census. Empirical models were estimated with generalized estimating equations (GEE). Findings suggest that physician employment is positively associated with better hospital performance for cardiac care. However, findings also suggest that information technology is a substitute for physician employment.
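
For readers unfamiliar with GEE, the following minimal sketch shows how a specification of the kind described above could be set up with statsmodels in Python. The variable names (compliance_score, pct_employed_cardiologists, it_cardiac, cardiac_volume, year, hospital_id), the file name, and the exchangeable working correlation are assumptions for illustration, not the study's actual model.

```python
# Hedged GEE sketch: hospital compliance regressed on employed-cardiologist
# share plus controls, with repeated observations clustered by hospital.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

panel = pd.read_csv("ca_hospitals_2010_2012.csv")   # hypothetical pooled panel

model = smf.gee(
    "compliance_score ~ pct_employed_cardiologists + it_cardiac "
    "+ cardiac_volume + C(year)",
    groups="hospital_id",                       # cluster on hospital
    data=panel,
    cov_struct=sm.cov_struct.Exchangeable(),    # assumed working correlation
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())
```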

Keywords: physician employment, hospitals, vertical integration, cardiac care

Procedia PDF Downloads 392
1656 Urbanization and Income Inequality in Thailand

Authors: Acumsiri Tantikarnpanit

Abstract:

This paper aims to examine the relationship between urbanization and income inequality in Thailand during the period 2002–2020. The analysis uses a panel of data for 76 provinces collected from Thailand’s National Statistical Office (Labor Force Survey: LFS), as well as geospatial data for nineteen selected years from the U.S. Air Force Defense Meteorological Satellite Program (DMSP) and the Visible Infrared Imaging Radiometer Suite Day/Night Band (VIIRS-DNB) satellite. This paper employs two different definitions to identify urban areas: 1) urban areas as defined by Thailand's National Statistical Office (Labor Force Survey: LFS), and 2) urban areas estimated using nighttime light data from the DMSP and VIIRS-DNB satellites. The second method includes two sub-categories: 2.1) delineating urban areas from nighttime light density corresponding to a population density of 300 people per square kilometer, and 2.2) delineating urban areas from nighttime light density corresponding to a population density of 1,500 people per square kilometer. The empirical analysis, based on Ordinary Least Squares (OLS), fixed effects, and random effects models, reveals a consistent U-shaped relationship between income inequality and urbanization: urbanization, measured through population density, has a significant negative impact on income inequality, while the square of urbanization has a statistically significant positive impact. Additionally, there is a negative association between logarithmically transformed income and income inequality. This paper also proposes the inclusion of satellite imagery, geospatial data, and spatial econometric techniques in future studies to conduct quantitative analysis of spatial relationships.
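
A hedged sketch of the pooled OLS, fixed-effects, and random-effects specifications mentioned above is given below, using the linearmodels package in Python. The column names (gini, urban_share, log_income), the file name, and the panel layout are assumptions for illustration only.

```python
# Hedged panel-regression sketch: inequality on urbanization, its square,
# and log income, estimated as pooled OLS, fixed effects, and random effects.
import pandas as pd
from linearmodels.panel import PanelOLS, PooledOLS, RandomEffects

df = pd.read_csv("province_panel.csv")          # hypothetical 76-province panel
df = df.set_index(["province", "year"])         # entity-time MultiIndex
df["urban_sq"] = df["urban_share"] ** 2

formula = "gini ~ 1 + urban_share + urban_sq + log_income"

pooled = PooledOLS.from_formula(formula, data=df).fit()
fe = PanelOLS.from_formula(formula + " + EntityEffects", data=df).fit()
re = RandomEffects.from_formula(formula, data=df).fit()

# A negative urban_share coefficient together with a positive urban_sq
# coefficient is consistent with the U-shaped relationship reported above.
print(pooled.params, fe.params, re.params, sep="\n\n")
```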

Keywords: income inequality, nighttime light, population density, Thailand, urbanization

Procedia PDF Downloads 71
1655 Anti-Inflammatory, Anti-Nociceptive and Anti-Arthritic Effects of Mirtazapine, Venlafaxine and Escitalopram in Rats

Authors: Sally A. El Awdan

Abstract:

Objective and Design: The purpose of this study was to evaluate the anti-inflammatory, anti-arthritic and analgesic effects of antidepressants. Methods: The carrageenan model was used to assess the effect on acute inflammation. Paw volumes were measured at 1, 2, 3 and 4 hours post-challenge. The anti-nociceptive effect was evaluated by the hot plate method. Chronic inflammation was induced using Complete Freund's Adjuvant (CFA): the animals were injected with the adjuvant in the sub-plantar tissue of the right hind paw. Paw volume, ankle flexion scores, adjuvant-induced hyperalgesia and serum cytokine levels were assessed. Results: The results obtained demonstrate that mirtazapine, venlafaxine and escitalopram significantly and dose-dependently inhibited carrageenan-induced rat paw oedema. Mirtazapine, venlafaxine and escitalopram also increased the reaction time of rats in the hot plate test. Intraplantar CFA injection increased paw volume, ankle flexion scores, thermal hyperalgesia and serum levels of interleukin-1β, PGE2 and TNF-α. Regular treatment of adjuvant-induced arthritic rats with mirtazapine, venlafaxine and escitalopram for up to 28 days showed anti-inflammatory and analgesic activities by suppressing paw volume, recovering paw withdrawal latency, and inhibiting ankle flexion scores in CFA-induced rats. In addition, serum levels of interleukin-1β, PGE2 and TNF-α in arthritic rats were significantly reduced by treatment with these drugs. Conclusion: These results suggest that antidepressants have significant anti-inflammatory and anti-nociceptive effects in acute and chronic models in rats, which may be associated with the reduction of interleukin-1β, PGE2 and TNF-α levels.

Keywords: antidepressants, carrageenan, anti-nociceptive, Complete Freund's Adjuvant

Procedia PDF Downloads 488
1654 Preparing Data for Calibration of Mechanistic-Empirical Pavement Design Guide in Central Saudi Arabia

Authors: Abdulraaof H. Alqaili, Hamad A. Alsoliman

Abstract:

Progress in pavement design has produced a design method titled the Mechanistic-Empirical Pavement Design Guide (MEPDG). The road and highway network in Saudi Arabia is currently expanding as a result of increasing traffic volumes, and the MEPDG is therefore being implemented for flexible pavement design by the Saudi Ministry of Transportation. Implementation of the MEPDG for local pavement design requires calibration of its distress models under local conditions (traffic, climate, and materials). This paper aims to prepare data for the calibration of the MEPDG in Central Saudi Arabia. The first goal is thus the collection of flexible pavement design data for the local conditions of the Riyadh region. Since the collected data must be transformed into model inputs, the main goal of this paper is the analysis of the collected data. The data analysis includes processing of: truck classification, traffic growth factor, Annual Average Daily Truck Traffic (AADTT), Monthly Adjustment Factors (MAFi), Vehicle Class Distribution (VCD), truck hourly distribution factors, Axle Load Distribution Factors (ALDF), the number of axles of each type (single, tandem, and tridem) per truck class, cloud cover percent, and the road sections selected for the local calibration. Detailed descriptions of the input parameters are given in this paper, providing an approach for successful implementation of the MEPDG. Local calibration of the MEPDG to the conditions of the Riyadh region can be performed based on the findings in this paper.
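
To make the traffic-input processing concrete, the following Python sketch shows one plausible way to derive AADTT, the monthly adjustment factors, and the vehicle class distribution from classified count data. The CSV layout and column names (month, day, vehicle_class, trucks) are hypothetical and do not reflect the Riyadh data set.

```python
# Hedged sketch of a few MEPDG traffic inputs computed from classified counts.
import pandas as pd

counts = pd.read_csv("classified_counts.csv")   # one row per day/month/class

# Annual Average Daily Truck Traffic (AADTT): mean of daily truck totals
daily_totals = counts.groupby(["month", "day"])["trucks"].sum()
aadtt = daily_totals.mean()

# Monthly Adjustment Factors (MAF): monthly average daily truck traffic,
# normalised so that the twelve factors sum to 12
madtt = daily_totals.groupby(level="month").mean()
maf = 12 * madtt / madtt.sum()

# Vehicle Class Distribution (VCD): percentage of trucks per FHWA class (4-13)
by_class = counts.groupby("vehicle_class")["trucks"].sum()
vcd = 100 * by_class / by_class.sum()

print(f"AADTT = {aadtt:.0f}", maf.round(3), vcd.round(1), sep="\n\n")
```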

Keywords: mechanistic-empirical pavement design guide (MEPDG), traffic characteristics, materials properties, climate, Riyadh

Procedia PDF Downloads 222
1653 Use of FWD in Determination of Bonding Condition of Semi-Rigid Asphalt Pavement

Authors: Nonde Lushinga, Jiang Xin, Danstan Chiponde, Lawrence P. Mutale

Abstract:

In this paper, a falling weight deflectometer (FWD) was used to determine the bonding condition of a newly constructed semi-rigid base pavement. Using the Evercal back-calculation computer programme, it was possible to quickly and accurately determine the structural condition of the pavement system from FWD test data. The bonding condition of the pavement layers was determined from shear stresses and strains (relative horizontal displacements) at the layer interfaces, calculated with the BISAR 3.0 pavement computer programme. Thus, by using non-linear layered elastic theory, a pavement structure is analysed in the same way as other civil engineering structures. From the non-destructive FWD testing, the bonding condition of the pavement layers was quantified from the soundly based principles of Goodman’s constitutive model, in which the interface shear stress is related to the relative horizontal displacement through the shear reaction modulus (Ks), an indicator of the bonding state of the pavement layers. Furthermore, the Tack Coat Failure Ratio (TFR), which has long been used in the USA in pavement evaluation, was also applied in order to give validity to the study. According to research [39], the interface condition between two asphalt layers is assessed through the Tack Coat Failure Ratio (TFR), the ratio of the stiffness of the top asphalt layer to the stiffness of the second asphalt layer (E1/E2) in a slipped pavement. The TFR gives an indication of the strength of the tack coat, which is the main determinant of interlayer slipping. The criterion is that a TFR greater than or equal to 1 indicates full bond, whereas a TFR of 0 indicates full slip. The calculations gave a TFR value of 1.81, which re-affirmed that the pavement under study was in a state of full bond because the value was greater than 1. It was concluded that the FWD can be used to determine the bonding condition of existing and newly constructed pavements.
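
The two interface indicators can be illustrated with a short worked sketch: in Goodman's interface model the shear stress is proportional to the relative horizontal displacement through the shear reaction modulus (Ks = τ/Δu), and the tack coat failure ratio is TFR = E1/E2. The numerical values in the Python snippet below are placeholders, not the study's back-calculated results.

```python
# Hedged illustration of the interface indicators; all values are placeholders.

def shear_reaction_modulus(tau_kpa: float, delta_u_mm: float) -> float:
    """Goodman's model: Ks (kPa/mm) = interface shear stress / relative slip."""
    return tau_kpa / delta_u_mm

def tack_coat_failure_ratio(e1_mpa: float, e2_mpa: float) -> float:
    """TFR = stiffness of the top asphalt layer over the layer beneath it."""
    return e1_mpa / e2_mpa

ks = shear_reaction_modulus(tau_kpa=250.0, delta_u_mm=0.02)   # placeholder values
tfr = tack_coat_failure_ratio(e1_mpa=3200.0, e2_mpa=1770.0)   # placeholder moduli

print(f"Ks  = {ks:.0f} kPa/mm")
print(f"TFR = {tfr:.2f} -> {'full bond' if tfr >= 1 else 'partial or lost bond'}")
```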

Keywords: falling weight deflectometer (FWD), back-calculation, semi-rigid base pavement, shear reaction modulus

Procedia PDF Downloads 508
1652 Topology Enhancement of a Straight Fin Using a Porous Media Computational Fluid Dynamics Simulation Approach

Authors: S. Wakim, M. Nemer, B. Zeghondy, B. Ghannam, C. Bouallou

Abstract:

Designing the optimal heat exchanger remains an essential objective. Parametric optimization involves evaluating the heat exchanger dimensions to find those that best satisfy certain objectives; this method contributes to an enhanced design rather than an optimized one. Topology optimization, by contrast, finds the optimal structure that satisfies the design objectives. The rapid development of metal additive manufacturing has allowed topology optimization to find its way into engineering applications, especially in the aerospace field for optimizing metal structures. Using topology optimization in 3D heat and mass transfer problems requires huge computational time, and coupling it with CFD simulations can reduce it. However, existing CFD models cannot be coupled directly with topology optimization: the CFD model must allow a uniform mesh to be created despite the complexity of the initial geometry and must allow cells to be swapped from fluid to solid and vice versa. In this paper, a porous media approach compatible with topology optimization criteria is developed. It consists of modeling the fluid region of the heat exchanger as a porous medium of high porosity and, similarly, the solid region as a porous medium of low porosity. The switching from fluid to solid cells required by topology optimization is simply done by changing each cell's porosity using a user-defined function. This model is tested on a plate and fin heat exchanger and validated by comparing its results to experimental data and simulation results. Furthermore, the model is used to perform a material reallocation based on local criteria to optimize a plate and fin heat exchanger under a constant heat duty constraint. The optimized fin uses 20% less material than the original, while the pressure drop is reduced by about 13%.
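
As a minimal sketch of how a per-cell porosity field can switch cells between fluid and solid, assuming a Brinkman-type penalisation rather than the authors' actual user-defined function, consider the following Python fragment; all interpolation functions and parameter values are illustrative assumptions.

```python
# Hedged sketch: porosity-driven fluid/solid switching for a porous-media CFD
# model. The penalisation law, mixing rule, and constants are assumptions.
import numpy as np

EPS_FLUID, EPS_SOLID = 0.99, 0.01        # assumed high/low porosity levels
ALPHA_MAX = 1.0e7                        # assumed maximum flow resistance, 1/s
K_SOLID, K_FLUID = 200.0, 0.6            # assumed conductivities, W/(m.K)

def flow_resistance(porosity: np.ndarray) -> np.ndarray:
    """Brinkman-type penalisation: near zero in fluid cells, large in solid cells."""
    return ALPHA_MAX * (1.0 - porosity) / (porosity + 1.0e-3)

def effective_conductivity(porosity: np.ndarray) -> np.ndarray:
    """Linear mixing rule between solid and fluid conductivities."""
    return porosity * K_FLUID + (1.0 - porosity) * K_SOLID

# Start from an all-fluid design and "solidify" cells flagged by a local
# reallocation criterion (a random mask stands in for that criterion here).
porosity = np.full((40, 40, 10), EPS_FLUID)
solid_mask = np.random.default_rng(0).random(porosity.shape) < 0.2
porosity[solid_mask] = EPS_SOLID

alpha = flow_resistance(porosity)          # would feed the momentum sink term
k_eff = effective_conductivity(porosity)   # would feed the energy equation
print(alpha.max(), k_eff.min())
```

In a CFD solver, the two fields would be updated in place at each optimization step, so the mesh itself never changes when material is reallocated.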

Keywords: computational methods, finite element method, heat exchanger, porous media, topology optimization

Procedia PDF Downloads 147