Search results for: interlaminar damage model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18308

8858 The Alarming Caesarean-Section Delivery Rate in Addis Ababa, Ethiopia

Authors: Yibeltal T. Bayou, Yohana S. Mashalla, Gloria Thupayagale-Tshweneagae

Abstract:

Background: According to the World Health Organization, caesarean section delivery rates above 10-15% in any geographic region are not justifiable. The aim of the study was to describe the level and analyse the determinants of caesarean section delivery in Addis Ababa. Methods: Data were collected in Addis Ababa using a structured questionnaire administered to 901 women aged 15-49 years selected through a stratified two-stage cluster sampling technique. A binary logistic regression model was employed to identify predictors of caesarean section delivery. Results: Among the 835 women who delivered their last birth at healthcare facilities, 19.2% gave birth by caesarean section. About 9.0% of the caesarean section births were due to the mother’s request or the service provider’s influence without any medical indication. The caesarean section delivery rate was much higher than the recommended rate, particularly among non-slum residents (27.2%); clients of private healthcare facilities (41.1%); currently married women (20.6%); women with secondary (22.2%) and tertiary (33.6%) levels of education; and women from households in the highest wealth quintile (28.2%). The majority (65.8%) of the caesarean section clients were not informed about the consequences of caesarean section delivery by service providers. The logistic regression model shows that older age (30-49), secondary education and above, non-slum residence, high-risk pregnancy and receiving adequate antenatal care were significantly and positively associated with caesarean section delivery. Conclusion: Despite concerted efforts towards achieving MDG 5 through skilled delivery assistance, among other measures, the caesarean section rate exceeded the recommended limit, and the finding that caesarean sections were performed without medical indication is alarming. The government and city administration should take appropriate measures before these problems become setbacks in healthcare provision. Further investigations should focus on the effect of caesarean section delivery on maternal and child health outcomes in the study area.
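As an illustration of the modelling step described above, a minimal sketch of a binary logistic regression of caesarean delivery on the reported predictors is given below (Python/statsmodels); the file name and variable names are hypothetical placeholders, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder survey data; column names are hypothetical.
# cs_delivery: 1 = caesarean section, 0 = vaginal delivery
df = pd.read_csv("addis_ababa_survey.csv")

model = smf.logit(
    "cs_delivery ~ C(age_group) + C(education) + C(residence)"
    " + C(wealth_quintile) + high_risk_pregnancy + adequate_anc",
    data=df,
).fit()

print(model.summary())        # coefficients are log-odds
print(np.exp(model.params))   # odds ratios for each predictor
```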

Keywords: Addis Ababa, caesarean section, mode of delivery, slum residence

Procedia PDF Downloads 391
8857 Optimizing Perennial Plants Image Classification by Fine-Tuning Deep Neural Networks

Authors: Khairani Binti Supyan, Fatimah Khalid, Mas Rina Mustaffa, Azreen Bin Azman, Amirul Azuani Romle

Abstract:

Perennial plant classification plays a significant role in various agricultural and environmental applications, assisting in plant identification, disease detection, and biodiversity monitoring. Nevertheless, attaining high accuracy in perennial plant image classification remains challenging due to the complex variations in plant appearance, the diverse range of environmental conditions under which images are captured, and the inherent variability in image quality stemming from factors such as lighting conditions, camera settings, and focus. This paper proposes an adaptation approach to optimize perennial plant image classification by fine-tuning pre-trained DNN models. It explores the efficacy of fine-tuning prevalent architectures, namely VGG16, ResNet50, and InceptionV3, leveraging transfer learning to tailor the models to the specific characteristics of perennial plant datasets. A subset of the MYLPHerbs dataset, consisting of 13,481 images of 6 perennial plant species captured under various environmental conditions, was used in the experiments. Different fine-tuning strategies, including adjusting learning rates, training set sizes, data augmentation, and architectural modifications, were investigated. The experimental outcomes underscore the effectiveness of fine-tuning deep neural networks for perennial plant image classification, with ResNet50 achieving the highest accuracy of 99.78%. Despite ResNet50's superior performance, VGG16 and InceptionV3 also achieved commendable accuracies of 99.67% and 99.37%, respectively. The overall outcomes reaffirm the robustness of the fine-tuning approach across different deep neural network architectures, offering insights into strategies for optimizing model performance in the domain of perennial plant image classification.
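The fine-tuning workflow described above follows the standard transfer-learning recipe; a minimal sketch with TensorFlow/Keras is given below. The class count matches the six species mentioned in the abstract, but the input size, layer choices, learning rates and dataset objects (train_ds, val_ds) are illustrative assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

NUM_CLASSES = 6          # six perennial species in the MYLPHerbs subset
IMG_SIZE = (224, 224)    # assumed input size

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=IMG_SIZE + (3,))
base.trainable = False   # stage 1: train only the new classification head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# stage 2: unfreeze the top of the backbone and fine-tune with a small learning rate
base.trainable = True
for layer in base.layers[:-30]:
    layer.trainable = False
model.compile(optimizer=optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```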

Keywords: perennial plants, image classification, deep neural networks, fine-tuning, transfer learning, VGG16, ResNet50, InceptionV3

Procedia PDF Downloads 45
8856 The Effects of Perceived Service Quality on Customers' Satisfaction, Trust and Loyalty in Online Shopping: A Case of Saudi Consumers' Perspectives

Authors: Nawt Almutairi, Ramzi El-Haddadeh

Abstract:

With the extensive increase in the number of online shops, loyalty has become the main goal for e-retailers, by which they can retain their existing customers and secure regular income instead of spending large sums to target new segments. To obtain customers’ loyalty, e-marketers should first satisfy customers by providing high-quality services that fulfil their demands, build their trust in the website, and thereby increase their intention to revisit it. This study investigates to what extent the elements of e-service quality presented in the literature affect customers’ satisfaction and how these influences contribute to customers’ trust and loyalty. Three dimensions of service quality are estimated. The first element is website interactivity, the perceived quality of interactive support and accessible communication tools. The second is security/privacy, the perceived quality of security and privacy controls during transactions on the website. The third is web design, a pleasant user interface with visual appeal. These elements are expected to have positive effects on shoppers’ satisfaction. To examine the proposed constructs, measurement scale items were adapted from similar prior studies. Survey data collected online from Saudi customers (n=106) were utilized to test the research hypotheses, which were then analyzed using a variety of regression tools. The analytical results suggest that the perceived quality of interactivity and security/privacy affects customers’ satisfaction, and that trust is a substantial construct that strongly affects loyalty in online shopping. This study provides a model that offers a simple understanding of the antecedents of customer loyalty in online shopping. One construct in the research model, web design, appears not to be an important antecedent of satisfaction (the path to loyalty) in online shopping.

Keywords: e-service, satisfaction, trust, loyalty

Procedia PDF Downloads 240
8855 Method for Assessing Potential in Distribution Logistics

Authors: B. Groß, P. Fronia, P. Nyhuis

Abstract:

In addition to production, which is already frequently optimized, improving distribution logistics also opens up tremendous potential for increasing an enterprise’s competitiveness. Here, too, numerous interactions need to be taken into account; enterprises thus need to be able to identify and weigh different potentials for economically efficient optimization. In order to assess these potentials, enterprises require a suitable method. This paper first briefly presents the need for this research before introducing the procedure that will be used to develop an appropriate method that not only considers interactions but can also be implemented quickly and easily.

Keywords: distribution logistics, evaluation of potential, methods, model

Procedia PDF Downloads 486
8854 The Environmental Impact Assessment of Land Use Planning (Case Study: Tannery Industry in Al-Garma District)

Authors: Husam Abdulmuttaleb Hashim

Abstract:

Environmental pollution problems represent a great challenge to the world, threatening to undo the progress that mankind has achieved. Organizations and associations concerned with the environment are trying to warn the world of the forthcoming danger resulting from the excessive consumption of natural resources and from ignoring the damage caused by their unfair use. Most urban centers suffer from environmental pollution problems and from the health, economic, and social dangers resulting from this pollution. Land use planning is responsible for distributing the different uses in urban centers and controlling the interactions between these uses in order to reach a homogeneous and balanced state for the different activities in cities; the occurrence of environmental problems under the existing land use planning process therefore points to a disorder or insufficiency in this process. This disorder lies in the lack of sufficient attention to environmental considerations during land use planning and the preparation of the master plan. The research addresses this problem and seeks solutions for it. It assumes that using accurate, scientific methods in the early stages of the land use planning process will prevent environmental pollution problems from occurring in the future. The research aims to study and demonstrate the importance of the environmental impact assessment (EIA) method as a planning tool for investigating and predicting the pollution caused by polluting land uses within the land use planning process. The research covers the concept of environmental assessment and its kinds, clarifies environmental impact assessment and its contents, and deals with the concepts of urban planning and land use planning. It then examines the current situation of the case study (Al-Garma district) and its land use planning, identifying the most polluting use, the industrial land use represented by the tannery industries, and describing its contents and the environmental impacts resulting from it. Water and soil tests carried out by the researcher were analyzed, and an environmental evaluation was performed by applying an environmental impact assessment matrix using the direct method to reveal the extent of pollution of the environment surrounding the industrial land use. Environmental and site criteria and standards were then applied using GIS and AUTOCAD to select the best alternative site for the industrial zone in Al-Garma district, after the research demonstrated the unsuitability of the current location with respect to the environmental and site criteria. The research concludes with findings and recommendations that clarify the identified problems and propose appropriate solutions.

Keywords: EIA, pollution, tannery industry, land use planning

Procedia PDF Downloads 440
8853 Analysis of Fine Motor Skills in Chronic Neurodegenerative Models of Huntington’s Disease and Amyotrophic Lateral Sclerosis

Authors: T. Heikkinen, J. Oksman, T. Bragge, A. Nurmi, O. Kontkanen, T. Ahtoniemi

Abstract:

Motor impairment is an inherent phenotypic feature of several chronic neurodegenerative diseases, and pharmacological therapies aimed at counterbalancing the motor disability have great market potential. Animal models of chronic neurodegenerative diseases display a progressively deteriorating motor phenotype during disease progression. There is a wide array of behavioral tools to evaluate motor functions in rodents. However, currently existing methods are often limited to evaluating gross motor functions only at advanced stages of the disease phenotype, and the most commonly applied traditional motor assays used in CNS rodent models lack the sensitivity to capture fine motor impairments or improvements. Fine motor skill characterization in rodents provides a more sensitive tool to capture subtle motor dysfunctions and therapeutic effects. Importantly, a similar approach, kinematic movement analysis, is also used in the clinic, both in diagnosis and in determining the therapeutic response to pharmacological interventions. The aim of this study was to apply kinematic gait analysis, a novel and automated high-precision movement analysis system, to characterize phenotypic deficits in three chronic neurodegenerative animal models: a transgenic mouse model (SOD1 G93A) of amyotrophic lateral sclerosis (ALS), and the R6/2 and Q175KI mouse models of Huntington’s disease (HD). The readouts from walking behavior included gait properties with kinematic data and body movement trajectories, including analysis of points of interest such as the movement and position of landmarks on the torso, tail and joints. Mice (transgenic and wild-type) from each model were analyzed for fine motor kinematic properties at young ages, prior to the age when gross motor deficits are clearly pronounced. Fine motor kinematic evaluation was continued in the same animals until clear motor dysfunction was evident with conventional motor assays. Time course analysis revealed clear fine motor skill impairments in each transgenic model earlier than is seen with conventional gross motor tests. Motor changes were quantitatively analyzed for up to ~80 parameters, and the largest data sets from the HD models were further processed with principal component analysis (PCA) to transform the pool of individual parameters into a smaller, focused set of mutually uncorrelated gait parameters showing strong genotype differences. The kinematic fine motor analysis of the transgenic animal models described in this presentation shows that this method is a sensitive, objective and fully automated tool that allows earlier and more sensitive detection of progressive neuromuscular and CNS disease phenotypes. As a result of the analysis, a comprehensive set of fine motor parameters is created for each model; these parameters provide a better understanding of disease progression and enhance the sensitivity of this assay for therapeutic testing compared to classical motor behavior tests. In SOD1 G93A, R6/2, and Q175KI mice, the alterations in gait were evident several weeks earlier than with traditional gross motor assays. Kinematic testing can be applied to a wider set of motor readouts beyond gait in order to study whole-body movement patterns, such as those involving joints and various body parts, longitudinally, providing a sophisticated and translatable method for dissecting motor components in rodent disease models and evaluating therapeutic interventions.
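As a pointer to the dimensionality-reduction step mentioned above, the sketch below shows how ~80 standardized gait parameters can be reduced with principal component analysis (scikit-learn); the data here are random placeholders, not the study measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# X: (n_animals, n_parameters) matrix of ~80 gait parameters; random placeholder data.
X = np.random.rand(40, 80)
X_std = StandardScaler().fit_transform(X)   # gait parameters have mixed units and scales

pca = PCA(n_components=0.95)                # keep components explaining 95% of the variance
scores = pca.fit_transform(X_std)
print(pca.n_components_, pca.explained_variance_ratio_)
# Component scores can then be compared between transgenic and wild-type groups
# (e.g. per-component statistics) to isolate gait measures with strong genotype differences.
```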

Keywords: gait analysis, kinematics, motor impairment, inherent feature

Procedia PDF Downloads 340
8852 The Effect of Chlorine Dioxide and High Concentration of CO2 Gas Injection on the Quality and Shelf-Life of Exported Strawberry 'Maehyang' in Modified Atmosphere Conditions

Authors: Hyuk Sung Yoon, In-Lee Choi, Mohammad Zahirul Islam, Jun Pill Baek, Ho-Min Kang

Abstract:

The strawberry ‘Maehyang’ cultivated in South Korea is increasingly exported to Southeast Asia, but quality degradation often occurs during the short export period. Botrytis cinerea is known to cause major damage to export strawberries, and the disease develops during shipping and distribution. This study was conducted to determine the sterilizing effect of chlorine dioxide (ClO2) gas and high-concentration CO2 gas injection for ‘Maehyang’ strawberries packaged with oxygen transmission rate (OTR) films. The strawberries were harvested at the 80% color-change stage and packaged with OTR film or perforated film (control). The treatments consisted of MAP with a 20,000 cc·m-2·day·atm OTR film and gas injection into the packages; ClO2 and CO2 were injected into the OTR film packages as 6 mg/L ClO2, 15% CO2, and their combination. The treated strawberries were stored at 3℃ for 30 days. The fresh weight loss rate was less than 1% in all OTR film packages but more than 15% in the perforated film treatment, which showed severe deterioration of visual quality during storage. The carbon dioxide concentration within the packages reached a maximum of approximately 15% in all treatments except the control until the 21st day, which is within the tolerated maximum CO2 concentration for strawberry under recommended CA or MA conditions, but it increased to almost 50% by the 30th day. The oxygen concentration decreased to approximately 0% in all treatments except the control over 25 days. The ethylene concentration remained steady until the 17th day, then increased quickly and dropped again by the final storage day (30th day); no significant differences were found among the gas treatments. Firmness increased in the CO2 (15%) and ClO2 (6 mg/L) + CO2 (15%) treatments during storage, which may reflect the effect of high CO2 concentrations, known to reduce decay and cell wall degradation. The soluble solids decreased in all treatments during storage, likely because sugars were consumed by increased respiration. The titratable acidity was similar in all treatments. The incidence of fungi was 0% in the CO2 (15%) and ClO2 (6 mg/L) + CO2 (15%) treatments but more than 20% in the perforated film treatment. Consequently, the results indicate that chlorine dioxide (ClO2) and high CO2 concentrations inhibited fungal growth. Because the fresh weight loss rate and the incidence of fungi were lowest, the ClO2 (6 mg/L) + CO2 (15%) treatment proved the most effective for sterilization. These results suggest that chlorine dioxide (ClO2) and high-concentration CO2 gas injection are effective decontamination techniques for improving the safety of strawberries.

Keywords: chlorine dioxide, high concentration of CO2, modified atmosphere condition, oxygen transmission rate films

Procedia PDF Downloads 330
8851 Effects of Plasma Technology in Biodegradable Films for Food Packaging

Authors: Viviane P. Romani, Bradley D. Olsen, Vilásia G. Martins

Abstract:

Biodegradable films for food packaging have gained growing attention due to the environmental pollution caused by synthetic films and the interest in making better use of natural resources. Important research advances have been made in the development of materials from proteins, polysaccharides, and lipids. However, the commercial use of this new generation of sustainable materials for food packaging is still limited due to their low mechanical and barrier properties, which could compromise food quality and safety. Thus, strategies to improve the performance of these materials have been tested, such as chemical modifications, incorporation of reinforcing structures, and others. Cold plasma is a versatile, fast and environmentally friendly technology. It consists of a partially ionized gas containing free electrons, ions, radicals and neutral particles able to react with polymers and initiate different reactions, leading to polymer degradation, functionalization, etching and/or cross-linking. In the present study, biodegradable films from fish protein prepared through the casting technique were plasma treated using AC glow discharge equipment. The reactor was first evacuated to ~7 Pa and the films were exposed to air plasma for 2, 5 and 8 min. The films were evaluated for their mechanical and water vapor permeability (WVP) properties, and changes in the protein structure were observed using scanning electron microscopy (SEM) and X-ray diffraction (XRD). Potential cross-links and the elimination of surface defects by etching might explain the observed increase in tensile strength and decrease in elongation at break. Among the exposure times tested, no further differences were observed at longer exposure times. The X-ray pattern showed a broad peak at 2θ = 19.51º, corresponding to a spacing of 4.6 Å by Bragg’s law; this distance corresponds to the average backbone distance within the α-helix. The changes observed in the films might therefore indicate that the helical configuration of the fish protein was disturbed by the plasma treatment. SEM images showed surface damage in the films after 5 and 8 min of plasma treatment, indicating that 2 min was the most adequate treatment time. It was verified that plasma removes water from the films, since a weight loss of 4.45% was recorded for films treated for 2 min; however, after 24 h at 50% relative humidity, the lost water was recovered. WVP increased from 0.53 to 0.65 g·mm/h·m²·kPa after 2 min of plasma treatment, which is desirable for some food applications that require water transfer through the packaging. In general, the plasma technology affects the properties and structure of fish protein films. Since this technology changes the surface of polymers, these films might be used to develop multilayer materials, as well as to incorporate active substances on the surface to obtain active packaging.
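The d-spacing quoted above follows directly from Bragg's law; the short check below assumes Cu Kα radiation (λ = 1.5406 Å), which is not stated in the abstract, and reproduces a spacing of roughly 4.5-4.6 Å for 2θ = 19.51°.

```python
import math

wavelength = 1.5406          # Å, Cu K-alpha assumed (radiation source not stated in the abstract)
two_theta  = 19.51           # degrees, position of the broad XRD peak

theta = math.radians(two_theta / 2.0)
d = wavelength / (2.0 * math.sin(theta))   # Bragg's law, n*lambda = 2*d*sin(theta), n = 1
print(f"d-spacing ≈ {d:.2f} Å")            # ~4.5-4.6 Å, the alpha-helix backbone spacing
```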

Keywords: fish protein films, food packaging, improvement of properties, plasma treatment

Procedia PDF Downloads 151
8850 Smart Container Farming: Innovative Urban Strawberry Farming Model from Japan to the World

Authors: Nishantha Giguruwa

Abstract:

This research investigates the transformative potential of smart container farming, building upon the successful cultivation of Japanese mushrooms at Sakai Farms in Aichi Prefecture, Japan, under the strategic collaboration with the Daikei Group. Inspired by this success, the study focuses on establishing an advanced urban strawberry farming laboratory with the aim of understanding strawberry farming technologies, fostering collaboration, and strategizing marketing approaches for both local and global markets. Positioned within the business framework of Sakai Farms and the Daikei Group, the study underscores the sustainability and forward-looking solutions offered by smart container farming in agriculture. The global significance of strawberries is emphasized, acknowledging their economic and cultural importance. The detailed examination of strawberry farming intricacies informs the technological framework developed for smart containers, implemented at Sakai Farms. Integral to this research is the incorporation of controlled bee pollination, a groundbreaking addition to the smart container farming model. The study anticipates future trends, outlining avenues for continuing exploration, stakeholder collaborations, policy considerations, and expansion strategies. Notably, the author expresses a strategic intent to approach the global market, leveraging the foreign student/faculty base at Ritsumeikan Asia Pacific University, where the author is affiliated. This unique approach aims to disseminate the research findings globally, contributing to the broader landscape of agricultural innovation. The integration of controlled bee pollination within this innovative framework not only enhances sustainability but also marks a significant stride in the evolution of urban agriculture, aligning with global agricultural trends.

Keywords: smart container farming, urban agriculture, strawberry farming technologies, controlled bee pollination, agricultural innovation

Procedia PDF Downloads 39
8849 Magnetohemodynamic of Blood Flow Having Impact of Radiative Flux Due to Infrared Magnetic Hyperthermia: Spectral Relaxation Approach

Authors: Ebenezer O. Ige, Funmilayo H. Oyelami, Joshua Olutayo-Irheren, Joseph T. Okunlola

Abstract:

Hyperthermia therapy is an adjuvant procedure during which perfused body tissues are subjected to an elevated temperature range in a bid to achieve improved drug potency and efficacy in cancer treatment. While one class of hyperthermia techniques relies on the thermal radiation derived from a single electro-radiation source, conjugating dual radiation field sources has been considered in an attempt to improve the delivery of the therapy procedure. This paper numerically explores the thermal effectiveness of combined infrared hyperthermia with nanoparticle recirculation in the vicinity of an imposed magnetic field on the subcutaneous strata of a model lesion as an ablation scheme. A spectral relaxation method (SRM) was formulated to handle the coupled momentum and thermal equilibrium equations in the blood-perfused domain of a spongy fibrous tissue. Thermal diffusion regimes in the presence of an imposed external magnetic field were described by leveraging the well-known Rosseland diffusion approximation to delineate the impact of radiative flux within the computational domain. The contribution of tissue sponginess was examined using pore-scale porosity mechanics over a selection of clinically informed scenarios. Our observations showed that, for a substantial depth of spongy lesion, the magnetic field architecture constitutes the controlling regime of hemodynamics at the blood-tissue interface while facilitating thermal transport across the depth of the model lesion. This parameter-indicator could be utilized to control the dispensing of hyperthermia treatment in intravenously perfused tissue.
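For reference, the Rosseland diffusion approximation mentioned above expresses the radiative flux as a diffusion-like term; in the linearized form commonly used in such boundary-layer studies (small temperature differences are assumed here, which the abstract does not state explicitly):

\[
q_r \;=\; -\frac{4\sigma^{*}}{3k^{*}}\,\frac{\partial T^{4}}{\partial y}
\;\approx\; -\frac{16\sigma^{*}T_{\infty}^{3}}{3k^{*}}\,\frac{\partial T}{\partial y},
\qquad T^{4}\approx 4T_{\infty}^{3}T-3T_{\infty}^{4},
\]

where \(\sigma^{*}\) is the Stefan-Boltzmann constant and \(k^{*}\) the mean absorption coefficient.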

Keywords: spectral relaxation scheme, thermal equilibrium, Rosseland diffusion approximation, hyperthermia therapy

Procedia PDF Downloads 98
8848 3D Numerical Modelling of a Pulsed Pumping Process of a Large Dense Non-Aqueous Phase Liquid Pool: In situ Pilot-Scale Case Study of Hexachlorobutadiene in a Keyed Enclosure

Authors: Q. Giraud, J. Gonçalvès, B. Paris

Abstract:

Remediation of dense non-aqueous phase liquids (DNAPLs) represents a challenging issue because of their persistent behaviour in the environment. This pilot-scale study investigates, by means of in situ experiments and numerical modelling, the feasibility of a pulsed pumping process for a large amount of DNAPL in an alluvial aquifer. The main compound of the DNAPL is hexachlorobutadiene, an emerging organic pollutant. A low-permeability keyed enclosure was built at the location of the DNAPL source zone in order to isolate a finite undisturbed volume of soil, and a 3-month pulsed pumping process was applied inside the enclosure to exclusively extract the DNAPL. The water/DNAPL interface elevation at both the pumping and observation wells and the cumulative pumped volume of DNAPL were recorded. A total volume of about 20 m³ of pure DNAPL was recovered, since no water was extracted during the process. The three-dimensional multiphase flow simulator TMVOC was used, and a conceptual model was elaborated and generated with the pre/post-processing tool mView. The numerical model consisted of 10 layers of variable thickness and 5,060 grid cells. The numerical simulations reproduce the pulsed pumping process and show an excellent match between simulated and field data for the cumulative pumped DNAPL volume, and a reasonable agreement between modelled and observed data for the evolution of the water/DNAPL interface elevations at the two wells. This study offers a new perspective in remediation, since DNAPL pumping system optimisation may be performed where a large amount of DNAPL is encountered.

Keywords: dense non-aqueous phase liquid (DNAPL), hexachlorobutadiene, in situ pulsed pumping, multiphase flow, numerical modelling, porous media

Procedia PDF Downloads 166
8847 Evaluation of Hepatic Metabolite Changes for Differentiation Between Non-Alcoholic Steatohepatitis and Simple Hepatic Steatosis Using Long Echo-Time Proton Magnetic Resonance Spectroscopy

Authors: Tae-Hoon Kim, Kwon-Ha Yoon, Hong Young Jun, Ki-Jong Kim, Young Hwan Lee, Myeung Su Lee, Keum Ha Choi, Ki Jung Yun, Eun Young Cho, Yong-Yeon Jeong, Chung-Hwan Jun

Abstract:

Purpose: To assess changes in hepatic metabolites for differentiation between non-alcoholic steatohepatitis (NASH) and simple steatosis using proton magnetic resonance spectroscopy (1H-MRS) in both humans and an animal model. Methods: The local institutional review board approved this study, and subjects gave written informed consent. 1H-MRS measurements were performed on a localized voxel of the liver using a point-resolved spectroscopy (PRESS) sequence, and the hepatic metabolites alanine (Ala), lactate/triglyceride (Lac/TG), and TG were analyzed in the NASH, simple steatosis and control groups. Group differences were tested with ANOVA and Tukey’s post-hoc tests, and diagnostic accuracy was assessed by calculating the area under the receiver operating characteristic (ROC) curve. The associations between metabolite concentrations and pathologic grades or non-alcoholic fatty liver disease (NAFLD) activity scores were assessed by Pearson’s correlation. Results: Patients with NASH showed elevated Ala (p < 0.001), Lac/TG (p < 0.001) and TG (p < 0.05) concentrations compared with patients who had simple steatosis and healthy controls. NASH patients had higher levels of Ala (mean±SEM, 52.5±8.3 vs 2.0±0.9; p < 0.001) and Lac/TG (824.0±168.2 vs 394.1±89.8; p < 0.05) than patients with simple steatosis. The area under the ROC curve for distinguishing NASH from simple steatosis was 1.00 (95% confidence interval: 1.00, 1.00) with Ala and 0.782 (95% confidence interval: 0.61, 0.96) with Lac/TG. The Ala and Lac/TG levels were well correlated with steatosis grade, lobular inflammation, and NAFLD activity scores. The metabolic changes observed in humans were reproducible in a mouse model induced by streptozotocin injection and a high-fat diet. Conclusion: 1H-MRS would be useful for differentiating NASH from simple hepatic steatosis.
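For readers less familiar with the diagnostic-accuracy metric used above, the sketch below computes the area under the ROC curve from metabolite concentrations with scikit-learn; the values are made-up placeholders, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical per-subject data: 1 = NASH, 0 = simple steatosis; ala = Ala concentration.
y   = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0])
ala = np.array([48.0, 61.2, 39.5, 55.1, 1.2, 2.8, 0.9, 3.5, 1.7])

auc = roc_auc_score(y, ala)
fpr, tpr, thresholds = roc_curve(y, ala)
print(f"AUC = {auc:.3f}")   # 1.00 would mean perfect separation, as reported for Ala
```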

Keywords: non-alcoholic fatty liver disease, non-alcoholic steatohepatitis, 1H MR spectroscopy, hepatic metabolites

Procedia PDF Downloads 316
8846 Influence of Microparticles in the Contact Region of Quartz Sand Grains: A Micro-Mechanical Experimental Study

Authors: Sathwik Sarvadevabhatla Kasyap, Kostas Senetakis

Abstract:

The mechanical behavior of geological materials is very complex, and this complexity is related to the discrete nature of soils and rocks. Characteristics of a material at the grain scale, such as particle size and shape, surface roughness and morphology, and the particle contact interface, are critical to evaluate in order to better understand the behavior of discrete materials. This study investigates experimentally the micro-mechanical behavior of quartz sand grains, with emphasis on the influence of the presence of microparticles in their contact region. The outputs of the study provide fundamental insights into the contact mechanics behavior of artificially coated grains and can provide useful input parameters for the discrete element modeling (DEM) of soils. In nature, the contact interfaces between real soil grains commonly contain microparticles. This is usually the case in sand-silt and sand-clay mixtures, where the finer particles may create a coating on the surface of the coarser grains, altering in this way the micro-, and thus the macro-scale response of geological materials. In this study, the micro-mechanical behavior of Leighton Buzzard Sand (LBS) quartz grains, with different microparticles interposed at their contact interfaces, is studied in the laboratory using an advanced custom-built inter-particle loading apparatus. Special techniques were adopted to develop the coating on the surfaces of the quartz sand grains so as to establish the repeatability of the coating technique. The characterization of the microstructure of the coated particle surfaces was based on element composition analyses, microscopic images, surface roughness measurements, and single particle crushing strength tests. The mechanical responses, such as normal and tangential load-displacement behavior, tangential stiffness behavior, and normal contact behavior under cyclic loading, were studied. The behavior of coated LBS particles is compared among the different coating classes and with pure LBS (i.e. with the surface cleaned to remove any microparticles). The damage on the surface of the particles was analyzed using microscopic images. Extended displacements in both the normal and tangential directions were observed for coated LBS particles due to the plastic nature of the coating material, and this varied with the amount of coating. The tangential displacement required to reach steady state was delayed due to the presence of microparticles in the contact region of grains under shearing. Increased tangential loads and coefficients of friction were observed for the coated grains in comparison to the uncoated quartz grains.

Keywords: contact interface, microparticles, micro-mechanical behavior, quartz sand

Procedia PDF Downloads 181
8845 The Use of Correlation Difference for the Prediction of Leakage in Pipeline Networks

Authors: Mabel Usunobun Olanipekun, Henry Ogbemudia Omoregbee

Abstract:

Anomalies such as leakages and bursts in water, hydraulic or petrochemical pipeline networks have significant implications for economic conditions and the environment. In order to ensure that pipeline systems are reliable, they must be efficiently monitored and controlled. Wireless sensor networks (WSNs) have become a powerful tool for monitoring critical pipeline infrastructure for water, oil and gas. The loss of water, oil and gas is inevitable and is strongly linked to financial costs and environmental problems, and its avoidance often leads to savings of economic resources. Substantial repair costs and the loss of precious natural resources are part of the financial impact of leaking pipes. Pipeline systems experts have implemented various methodologies in recent decades to identify and locate leakages in water, oil and gas supply networks. These methodologies include, among others, the use of acoustic sensors, pressure measurements, and statistical analysis of abrupt changes. The leak quantification problem is to estimate, given some observations about the network, the size and location of one or more leaks in a water pipeline network. In detecting background leakage, however, there is greater uncertainty in using these methodologies since their output is not very reliable. In this work, we present a scalable concept and simulation in which a pressure-driven model (PDM) was used to determine water pipeline leakage in a network. Pressure data were collected using acoustic sensors located at node points a predetermined distance apart. Using the correlation difference, we were able to determine the leakage point, which was locally introduced at a predetermined location between two consecutive nodes and caused a substantial pressure difference in the pipeline network. After de-noising the signals from the sensors at the nodes, we successfully obtained the exact point where the local leakage was introduced using the correlation difference model we developed.
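The authors' correlation-difference model is their own development; as background, the sketch below shows the generic cross-correlation approach to locating a leak between two sensor nodes from the arrival-time delay of the leak-induced signal. The function and variable names are illustrative assumptions.

```python
import numpy as np

def leak_distance_from_a(sig_a, sig_b, fs, spacing, wave_speed):
    """Locate a leak between two sensors from the cross-correlation time delay.

    sig_a, sig_b : de-noised pressure/acoustic signals recorded at nodes A and B
    fs           : sampling frequency (Hz)
    spacing      : distance between the two nodes (m)
    wave_speed   : propagation speed of the leak-induced wave in the pipe (m/s)
    Assumes the leak lies between the two sensors.
    """
    sig_a = sig_a - sig_a.mean()
    sig_b = sig_b - sig_b.mean()
    xcorr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(xcorr) - (len(sig_b) - 1)   # samples by which A lags B
    tau = lag / fs                              # arrival-time difference t_A - t_B
    return (spacing + wave_speed * tau) / 2.0   # distance of the leak from node A
```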

Keywords: leakage detection, acoustic signals, pipeline network, correlation, wireless sensor networks (WSNs)

Procedia PDF Downloads 79
8844 Electrohydrodynamic Study of Microwave Plasma PECVD Reactor

Authors: Keltoum Bouherine, Olivier Leroy

Abstract:

The present work is dedicated to a three-dimensional (3D) self-consistent fluid simulation of microwave discharges of argon plasma in a PECVD reactor. The model solves Maxwell's equations, the continuity equations for charged species and the electron energy balance equation, coupled with Poisson's equation and the Navier-Stokes equations, by the finite element method using the COMSOL Multiphysics software. The simulations yield the profiles of the plasma components as well as the charge densities, electron temperature, electric field, gas velocity, and gas temperature. The results show that the microwave plasma reactor is outside of local thermodynamic equilibrium.
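Schematically, the fluid part of such a model couples drift-diffusion continuity equations for the charged species with Poisson's equation for the electrostatic field, for example:

\[
\frac{\partial n_e}{\partial t}+\nabla\cdot\boldsymbol{\Gamma}_e=S_e,\qquad
\boldsymbol{\Gamma}_e=-\mu_e n_e\mathbf{E}-D_e\nabla n_e,\qquad
\nabla^{2}\phi=-\frac{e\,(n_i-n_e)}{\varepsilon_0},
\]

together with the electron energy balance and Maxwell's equations for the microwave field; the exact source terms and transport coefficients used in the COMSOL model are not specified in the abstract.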

Keywords: electron density, electric field, microwave plasma reactor, gas velocity, non-equilibrium plasma

Procedia PDF Downloads 314
8843 Working with Interpreters: Using Role Play to Teach Social Work Students

Authors: Yuet Wah Echo Yeung

Abstract:

Working with people from minority ethnic groups, refugees and asylum-seeking communities who have limited proficiency in the language of the host country often presents a major challenge for social workers. Because of language differences, social workers need to work with interpreters to ensure accurate information is collected for their assessment and intervention. Drawing on social learning theory, this paper discusses how role play was used as an experiential learning exercise in a training session to help social work students develop skills for working with interpreters. Social learning theory posits that learning is a cognitive process that takes place in a social context when people observe, imitate and model others’ behaviours. The role play also helped students understand the role of the interpreter and the challenges they may face when they rely on interpreters to communicate with service users and their families. The first part of the session involved role play. A tutor played the role of the social worker and deliberately behaved in an unprofessional manner and used inappropriate body language when working alongside the interpreter during a home visit. The purpose of the role play is not to provide a positive role model for students to ‘imitate’ the social worker’s behaviours. Rather, it aims to activate and provoke internal thinking processes and encourages students to critically consider the impacts of poor practice on relationship building and the intervention process. Having critically reflected on the implications of poor practice, students were then asked to play the role of the social worker and demonstrate what good practice should look like. At the end of the session, students remarked that they learnt a lot by observing the good and the bad example; it showed them what not to do. The exercise served to remind students how easily practitioners can slip into bad habits and of the importance of respect for cultural differences when working with people from different cultural backgrounds.

Keywords: role play, social learning theory, social work practice, working with interpreters

Procedia PDF Downloads 166
8842 Determinants of Investment in Vaca Muerta, Argentina

Authors: Ivan Poza Martínez

Abstract:

The international energy landscape has been significantly affected by the Covid-19 pandemic and the conflict in Ukraine. The Vaca Muerta sedimentary formation in Argentina's Neuquén province has become a crucial area for energy production, specifically in the shale gas and shale oil sectors. The massive investment required to exploit this reserve makes it essential to understand the determinants of investment in the upstream sector at both the local and international levels. The aim of this study is to identify the qualitative and quantitative determinants of investment in Vaca Muerta. The research methodology employs both quantitative (econometric) and qualitative approaches; a linear regression model is used to analyze investment in non-conventional hydrocarbons. The study highlights that, in addition to quantitative factors, qualitative variables, particularly the design of a regulatory framework, significantly influence the level of investment in Vaca Muerta. The analysis reveals the importance of attracting both domestic and foreign capital investment. This research contributes to understanding the factors influencing investment in the Vaca Muerta region compared to other published studies, and it emphasizes the role of qualitative variables, such as regulatory frameworks, in the development of the shale gas and oil sectors. The study uses a combination of quantitative data, such as investment figures, and qualitative data, such as regulatory frameworks, collected from various reports and industry publications. The linear regression model is used to analyze the relationship between these variables and investment in Vaca Muerta. The research addresses the question of what factors drive investment in the Vaca Muerta region, from both a quantitative and a qualitative perspective. The study concludes that a combination of quantitative and qualitative factors, including the design of a regulatory framework, plays a significant role in attracting investment in Vaca Muerta, and it highlights the importance of these determinants in the development of the local energy sector and the potential economic benefits for Argentina and the Southern Cone region.

Keywords: Vaca Muerta, FDI, shale gas, shale oil, YPF

Procedia PDF Downloads 38
8841 Statistical and Artificial Neural Network Modeling of Suspended Sediment in the Mina River Watershed at Wadi El-Abtal Gauging Station (Northern Algeria)

Authors: Redhouane Ghernaout, Amira Fredj, Boualem Remini

Abstract:

Suspended sediment transport is a serious problem worldwide, but it is much more worrying in certain regions, as is the case in the Maghreb and more particularly in Algeria. It continues to assume disturbing proportions in northern Algeria due to the temporal and spatial variability of rainfall and the constant deterioration of vegetation cover. Its prediction is essential in order to quantify its intensity and define the actions necessary for its reduction. The purpose of this study is to analyze the suspended sediment concentration data measured at the Wadi El-Abtal hydrometric station. It also aims to identify power-law regression relationships that explain the suspended solid discharge in terms of the measured liquid discharge, and to build artificial neural network models linking the flow, month and precipitation parameters with the solid discharge. The results show that the power-law sediment rating curve and the artificial neural network models are appropriate methods for analysing and estimating suspended sediment transport in Wadi Mina at the Wadi El-Abtal hydrometric station. They made it possible to identify, fairly conclusively, a neural network model with four input parameters: the liquid flow Q, the month, and the daily precipitation measured at the representative stations (Frenda 013002 and Ain El-Hadid 013004) of the watershed. The model thus obtained makes it possible to estimate (interpolate and extrapolate) daily solid discharges even beyond the period of observation of solid flows (1985/86 to 1999/00), given the availability of mean daily liquid flows and daily precipitation since 1953/1954.
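As a sketch of the two approaches compared above, the snippet below fits a power-law rating curve Qs = a·Q^b in log-log space and trains a small neural network on the four retained inputs (Q, month, and precipitation at the two stations); the numbers are illustrative placeholders, not the Wadi El-Abtal records.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Power-law rating curve Qs = a * Q**b, fitted in log-log space (hypothetical data).
Q  = np.array([1.2, 3.5, 8.0, 15.0, 40.0])     # liquid discharge (m3/s)
Qs = np.array([0.4, 2.1, 9.5, 30.0, 160.0])    # suspended-sediment discharge (kg/s)
b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
print(f"rating curve: Qs = {np.exp(log_a):.3f} * Q^{b:.2f}")

# ANN with the four retained inputs: Q, month, P(Frenda 013002), P(Ain El-Hadid 013004).
month = np.array([1, 3, 5, 9, 11])
P1 = np.array([0.0, 4.2, 12.0, 1.5, 25.0])     # daily precipitation (mm), hypothetical
P2 = np.array([0.0, 3.8, 10.5, 2.0, 22.0])
X = np.column_stack([Q, month, P1, P2])
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
ann.fit(X, np.log(Qs))                          # log target stabilises the heavy-tailed flows
```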

Keywords: suspended sediment, concentration, regression, liquid flow, solid flow, artificial neural network, modeling, Mina, Algeria

Procedia PDF Downloads 83
8840 Improvement of Environment and Climate Change Canada’s GEM-Hydro Streamflow Forecasting System

Authors: Etienne Gaborit, Dorothy Durnford, Daniel Deacu, Marco Carrera, Nathalie Gauthier, Camille Garnaud, Vincent Fortin

Abstract:

A new experimental streamflow forecasting system was recently implemented at Environment and Climate Change Canada’s (ECCC) Canadian Centre for Meteorological and Environmental Prediction (CCMEP). It relies on CaLDAS (Canadian Land Data Assimilation System) for the assimilation of surface variables and on a surface prediction system that feeds a routing component. The surface energy and water budgets are simulated with the SVS (Soil, Vegetation, and Snow) land-surface scheme (LSS) at 2.5-km grid spacing over Canada. The routing component is based on the Watroute routing scheme at 1-km grid spacing for the Great Lakes and Nelson River watersheds. The system is run in two distinct phases: an analysis phase and a forecast phase. During the analysis phase, CaLDAS outputs are used to force the routing system, which performs streamflow assimilation. In forecast mode, the surface component is forced with the Canadian GEM atmospheric forecasts and is initialized with a CaLDAS analysis. The streamflow performance of this new system is presented for 2019 and compared to ECCC’s current operational streamflow forecasting system, which differs from the new experimental system in many respects. These new streamflow forecasts are also compared to persistence. Overall, the new streamflow forecasting system presents promising results, highlighting the need for an elaborate assimilation phase before performing the forecasts. However, the system is still experimental and is continuously being improved. Some major recent improvements are presented here and include, for example, the assimilation of snow cover data from remote sensing, backward propagation of assimilated flow observations, a new numerical scheme for the routing component, and a new reservoir model.

Keywords: assimilation system, distributed physical model, offline hydro-meteorological chain, short-term streamflow forecasts

Procedia PDF Downloads 118
8839 Pharmacophore-Based Modeling of a Series of Human Glutaminyl Cyclase Inhibitors to Identify Lead Molecules by Virtual Screening, Molecular Docking and Molecular Dynamics Simulation Study

Authors: Ankur Chaudhuri, Sibani Sen Chakraborty

Abstract:

In humans, glutaminyl cyclase activity is highly abundant in neuronal and secretory tissues and is preferentially restricted to the hypothalamus and pituitary. The N-terminal modification of β-amyloid (Aβ) peptides by the generation of pyroglutamyl (pGlu)-modified Aβs (pE-Aβs) is an important process in the initiation of the formation of neurotoxic plaques in Alzheimer’s disease (AD). This process is catalyzed by glutaminyl cyclase (QC). The expression of QC is characteristically up-regulated in the early stage of AD, and the hallmark of QC inhibition is the prevention of the formation of pE-Aβs and plaques. A computer-aided drug design (CADD) process was employed to guide the design of potentially active compounds and to understand their inhibitory potency against human glutaminyl cyclase (QC). This work elaborates the ligand-based and structure-based pharmacophore exploration of glutaminyl cyclase (QC) using known inhibitors. Three-dimensional (3D) quantitative structure-activity relationship (QSAR) methods were applied to 154 compounds with known IC50 values. All the inhibitors were divided into two sets, a training set and a test set. The training set was used to build the quantitative pharmacophore model based on the principle of structural diversity, whereas the test set was employed to evaluate the predictive ability of the pharmacophore hypotheses. A chemical feature-based pharmacophore model was generated from the 92 known training-set compounds by the HypoGen module implemented in the Discovery Studio 2017 R2 software package. The best hypothesis (Hypo1) was selected based on the highest correlation coefficient (0.8906), lowest total cost (463.72), and lowest root mean square deviation (2.24 Å). A higher correlation coefficient indicates greater predictive ability of the hypothesis, whereas a lower root mean square deviation signifies a smaller deviation of the experimental activity from the predicted one. The best pharmacophore model (Hypo1) of the candidate inhibitors comprised four features: two hydrogen bond acceptors, one hydrogen bond donor, and one hydrophobic feature. Hypo1 was validated by several parameters, such as test-set activity prediction, cost analysis, Fischer's randomization test, the leave-one-out method, and a ligand-profiler heat map. The predicted features were then used for virtual screening of potential compounds from the NCI, ASINEX, Maybridge and ChemBridge databases. More than seven million compounds were used for this purpose. The hit compounds were filtered by drug-likeness and pharmacokinetic properties. The selected hits were docked into the high-resolution three-dimensional structure of the target protein glutaminyl cyclase (PDB ID: 2AFU/2AFW) to filter them further. To validate the molecular docking results, the most active compound from the dataset was selected as a reference molecule. From the density functional theory (DFT) study, ten molecules were selected based on their highest HOMO (highest occupied molecular orbital) energies and lowest band-gap values. Molecular dynamics simulations of the final ten hit compounds with explicit solvation revealed that a large number of non-covalent interactions were formed with the binding site of human glutaminyl cyclase. The hit compounds reported in this study could help in the future design of potent lead inhibitors against human glutaminyl cyclase.
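The drug-likeness filtering step mentioned above is commonly implemented with a Lipinski rule-of-five screen; a minimal RDKit sketch is shown below. The SMILES strings are placeholders, not the actual screening hits, and the study's full pharmacokinetic filtering is not reproduced here.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_lipinski(smiles):
    """Rule-of-five screen often used to pre-filter virtual-screening hits."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

hits = ["CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1O"]   # placeholder SMILES, not actual hits
drug_like = [s for s in hits if passes_lipinski(s)]
print(drug_like)
```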

Keywords: glutaminyl cyclase, hit lead, pharmacophore model, simulation

Procedia PDF Downloads 123
8838 LHCII Proteins Phosphorylation Changes Involved in the Dark-Chilling Response in Plant Species with Different Chilling Tolerance

Authors: Malgorzata Krysiak, Anna Wegrzyn, Maciej Garstka, Radoslaw Mazur

Abstract:

Under constantly fluctuating environmental conditions, the thylakoid membrane protein network has evolved the ability to respond dynamically to changing biotic and abiotic factors. One of the most important protective mechanisms is the rearrangement of the chlorophyll-protein (CP) complexes, induced by protein phosphorylation. In a temperate climate, low temperature is one of the abiotic stresses that heavily affect plant growth and productivity. The aim of this study was to determine the role of LHCII antenna complex phosphorylation in the dark-chilling response. The study used an experimental model based on dark-chilling at 4 °C of detached leaves of chilling-sensitive (CS) runner bean (Phaseolus coccineus L.) and chilling-tolerant (CT) garden pea (Pisum sativum L.). This model is well described in the literature and is used for the analysis of chilling impact without any additional effects caused by light. We examined changes in thylakoid membrane protein phosphorylation, interactions between phosphorylated LHCII (P-LHCII) and CP complexes, and their impact on the dynamics of photosystem II (PSII) under dark-chilling conditions. Our results showed that the dark-chilling treatment of CS bean leaves induced a substantial increase in the phosphorylation of LHCII proteins, as well as changes in CP complex composition and their interaction with P-LHCII. The PSII photochemical efficiency measurements showed that in bean, PSII is overloaded with light energy, which is not compensated by CP complex rearrangements. On the contrary, no significant changes in PSII photochemical efficiency, phosphorylation pattern or CP complex interactions were observed in CT pea. In conclusion, our results indicate that different responses of LHCII phosphorylation to chilling stress take place in CT and CS plants, and that the kinetics of LHCII phosphorylation and the interactions of P-LHCII with photosynthetic complexes may be crucial to the chilling stress response. Acknowledgments: this work was financed by the National Science Centre, Poland, grant No. 2016/23/D/NZ3/01276.

Keywords: LHCII, phosphorylation, chilling stress, pea, runner bean

Procedia PDF Downloads 123
8837 Numerical Simulation of Convective and Transport Processes in the Nocturnal Atmospheric Surface Layer

Authors: K. R. Sreenivas, Shaurya Kaushal

Abstract:

After sunset, under calm, clear-sky nocturnal conditions, the aerosol-laden air layer near the surface cools through radiative exchange with the upper atmosphere. Due to this cooling, the surface air-layer temperature can fall 2-6 °C below the ground-surface temperature. This unstable convection layer is capped on top by a stable inversion layer. Radiative divergence, along with convection within the surface layer, governs the vertical transport of heat and moisture, and the micro-physics in this layer have implications for the occurrence and growth of the fog layer. This particular configuration, featuring a convective mixed layer beneath a stably stratified inversion layer, exemplifies a classic case of penetrative convection. In this study, we conduct numerical simulations of the penetrative convection phenomenon within the nocturnal atmospheric surface layer and elucidate its relevance to the dynamics of fog layers. We employ field and laboratory measurements of aerosol number density to model the strength of the radiative cooling. Our analysis encompasses horizontally averaged vertical profiles of temperature, density, and heat flux. The energetic incursion of air from the mixed layer into the stable inversion layer across the interface results in entrainment and the growth of the mixed layer, the modeling of which is the key focus of our investigation. In our research, we ascertain the appropriate length scale to employ in the Richardson number correlation, which allows us to estimate the entrainment rate and model the growth of the mixed layer. Our analysis of the mixed layer and the entrainment zone reveals a close alignment with previously reported laboratory experiments on penetrative convection. Additionally, we demonstrate how aerosol number density influences the growth or decay of the mixed layer. Furthermore, our study suggests that the presence of fog near the ground surface can induce extensive vertical mixing, a phenomenon observed in field experiments.
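As background for the entrainment closure discussed above, the sketch below integrates mixed-layer growth with a classical Richardson-number entrainment law, w_e/w_* = A·Ri_*⁻¹; the constant A (~0.2), the buoyancy-flux and stratification values, and the choice of scales are assumptions for illustration, not values reported in the study.

```python
import numpy as np

# Mixed-layer growth by penetrative convection with a classical entrainment law,
# w_e / w_* = A * Ri_*^-1,  Ri_* = delta_b * h / w_*^2 (Deardorff-type closure).
# A, B0 and N2 below are assumed illustrative values, not results from the study.
A  = 0.2            # entrainment constant (assumed)
B0 = 1e-5           # surface buoyancy flux from radiative cooling (m^2/s^3, assumed)
N2 = 1e-4           # stratification of the capping inversion (s^-2, assumed)
h, dt = 10.0, 10.0  # initial mixed-layer depth (m) and time step (s)

for _ in range(int(6 * 3600 / dt)):       # integrate over ~6 hours of night
    w_star  = (B0 * h) ** (1.0 / 3.0)     # convective velocity scale
    delta_b = N2 * h                      # buoyancy jump at the interface (simplified)
    Ri      = delta_b * h / w_star ** 2   # interfacial Richardson number
    w_e     = A * w_star / Ri             # entrainment velocity
    h      += w_e * dt
print(f"mixed-layer depth after 6 h: {h:.1f} m")
```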

Keywords: inversion layer, penetrative convection, radiative cooling, fog occurrence

Procedia PDF Downloads 57
8836 Thermal Hydraulic Analysis of Sub-Channels of Pressurized Water Reactors with Hexagonal Array: A Numerical Approach

Authors: Md. Asif Ullah, M. A. R. Sarkar

Abstract:

This paper presents 2-D and 3-D simulations of the sub-channels of a pressurized water reactor (PWR) with a hexagonal array of fuel rods. At steady state, the temperature of the outer surface of the fuel rod cladding is kept at about 1200°C, and this isothermal surface is taken as a boundary condition for the simulation. Water at 290°C enters as coolant to the primary circuit, which is pressurized up to 157 bar, and turbulent flow of the pressurized water is used for heat removal. In the 2-D model, the temperature, velocity, pressure and Nusselt number distributions are simulated in a vertical sectional plane through the sub-channels of a hexagonal fuel rod assembly. The temperature, Nusselt number and Y-component of the convective heat flux along a line in this plane near the end of the fuel rods are plotted for different Reynolds numbers, and the X- and Y-components of the convective heat flux in this vertical plane are compared. The hexagonal fuel rod assembly has three types of sub-channels according to geometry, each with different boundary conditions. In the 3-D model, the temperature, velocity, pressure, Nusselt number and total heat flux magnitude distributions for all three sub-channel types are studied for a suitable Reynolds number. A horizontal sectional plane is taken from each of the three sub-channels to study the temperature, velocity, pressure, Nusselt number and convective heat flux distributions in it. Greater values of temperature, Nusselt number and Y-component of the convective heat flux are found for greater Reynolds numbers. The X-component of the convective heat flux is found to be non-zero near the bottom of the fuel rod and zero near the end of the fuel rod, which indicates that near the outlet the convective heat transfer occurs entirely along the direction of flow. As the length-to-radius ratio of the sub-channels is very high, only a short length of the sub-channels is simulated for ease of graphical presentation. For the simulations, the Turbulent Flow (k-ε) module and the Heat Transfer in Fluids (ht) module of COMSOL MULTIPHYSICS 5.0 are used.
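For orientation, the single-phase Dittus-Boelter correlation gives a rough estimate of the sub-channel heat-transfer coefficient implied by the Nusselt numbers discussed above; it is not the CFD model used in the paper, and the property values and dimensions below are assumed, not taken from the study.

```python
# Dittus-Boelter estimate of the sub-channel heat-transfer coefficient:
# Nu = 0.023 * Re**0.8 * Pr**0.4 (turbulent flow in a channel, heating).
# Property values and dimensions are assumed, indicative of PWR primary coolant.
k_w = 0.55        # W/(m K), thermal conductivity of water near 300 C, 15.7 MPa (approx.)
Pr  = 1.0         # Prandtl number (approx.)
D_h = 0.012       # m, hydraulic diameter of the sub-channel (assumed)
Re  = 5.0e5       # Reynolds number of the coolant flow (assumed)

Nu = 0.023 * Re ** 0.8 * Pr ** 0.4
h  = Nu * k_w / D_h
print(f"Nu ≈ {Nu:.0f}, h ≈ {h / 1000:.1f} kW/(m² K)")
```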

Keywords: sub-channels, Reynolds number, Nusselt number, convective heat transfer

Procedia PDF Downloads 352
8835 Determination of Stress-Strain Curve of Duplex Stainless Steel Welds

Authors: Carolina Payares-Asprino

Abstract:

Dual-phase duplex stainless steel, comprising ferrite and austenite, shows high strength and corrosion resistance in many aggressive environments. Joining duplex alloys is challenging because of several embrittling precipitates and the metallurgical changes that occur during welding. The welding parameters strongly influence the quality of a weld joint, so it is necessary to quantify the integral properties of the weld bead as a function of the welding parameters, especially when part of the weld bead is removed by machining for aesthetic reasons or to fit the elements into the in-service structure. The present study uses existing stress-strain models to predict the stress-strain curves of duplex stainless steel welds under different welding conditions. Mathematical expressions that predict the shape of the stress-strain curve are advantageous because they reduce the experimental work needed for tensile testing; in analysis and design, such models also save time because they can be integrated directly into calculation tools such as finite element codes. The elastic and plastic zones of the curve can be defined by specific parameters, generating expressions that reproduce the curve with great precision. Empirical equations exist that describe the stress-strain curve of stainless steel, but only for the base material, not for material that has undergone welding; extending them to welds is therefore a significant contribution to the application of duplex stainless steel welds. For this study, a design with low, medium, and high levels of each welding parameter was applied, giving a total of 27 welded plates. Two tensile specimens were manufactured from each welded plate, resulting in 54 tensile specimens for testing. Of the four models used to predict the stress-strain curve of the welded specimens, only one (Rasmussen) showed a good correlation in predicting the stress-strain curve.
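
As a hedged illustration of the kind of expression referred to above, the sketch below implements the two-stage Rasmussen full-range stress-strain model as it is commonly cited (a Ramberg-Osgood curve up to the 0.2% proof stress and a second stage up to the ultimate stress); the material parameters are placeholders, not the measured duplex weld values from this study.

```python
import numpy as np

# Two-stage Rasmussen stress-strain expression (full-range Ramberg-Osgood
# extension), as commonly cited. Material parameters are illustrative only.

E = 200.0e3          # Young's modulus, MPa
sigma_02 = 550.0     # 0.2% proof stress, MPa (assumed)
sigma_u = 780.0      # ultimate tensile strength, MPa (assumed)
n = 8.0              # Ramberg-Osgood hardening exponent (assumed)

# Derived Rasmussen parameters
E_02 = E / (1.0 + 0.002 * n * E / sigma_02)    # tangent modulus at sigma_02
eps_u = 1.0 - sigma_02 / sigma_u               # ultimate strain estimate
m = 1.0 + 3.5 * sigma_02 / sigma_u             # second-stage exponent
eps_02 = sigma_02 / E + 0.002                  # total strain at sigma_02

def strain(sigma):
    """Total strain for a given stress (MPa) over the full range."""
    if sigma <= sigma_02:
        return sigma / E + 0.002 * (sigma / sigma_02) ** n
    ds = sigma - sigma_02
    return ds / E_02 + eps_u * (ds / (sigma_u - sigma_02)) ** m + eps_02

stresses = np.linspace(0.0, sigma_u, 50)
curve = [(s, strain(s)) for s in stresses]
print(curve[-1])   # strain at the ultimate stress
```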

Keywords: duplex stainless steels, modeling, stress-strain curve, tensile test, welding

Procedia PDF Downloads 155
8834 Numerical Analysis of Gas-Particle Mixtures through Pipelines

Authors: G. Judakova, M. Bause

Abstract:

The ability to model and numerically simulate natural gas flow in pipelines has become highly important for the design of pipeline systems. Understanding the formation of hydrate particles and their dynamical behavior is of particular interest, since these processes govern the operating properties of the systems and are responsible for system failures through clogging of the pipelines under certain conditions. Mathematically, natural gas flow can be described by multiphase flow models. Using the two-fluid modeling approach, the gas phase is modeled by the compressible Euler equations and the particle phase by the pressureless Euler equations. The numerical simulation of compressible multiphase flows is an important research topic: it is well known that for nonlinear fluxes, even for smooth initial data, discontinuities in the solution, shock waves or contact discontinuities, are likely to occur in finite time. For hyperbolic and singularly perturbed parabolic equations, the standard Galerkin finite element method (FEM) produces spurious oscillations (e.g. the Gibbs phenomenon). In our approach we use a stabilized FEM, the streamline upwind Petrov-Galerkin (SUPG) method, in which artificial diffusion acting only along the streamlines is added, together with a special treatment of the boundary conditions in the inviscid convective terms. Numerical experiments show that the SUPG-stabilized numerical solution captures discontinuities and steep gradients of the exact solution in thin layers; within these layers, however, the approximate solution may still exhibit overshoots or undershoots. To reduce these artifacts we add a discontinuity-capturing (shock-capturing) term. The performance of the scheme is illustrated for a two-phase flow problem.
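
A generic sketch of the SUPG-stabilized weak form with an added shock-capturing term is given below; the stabilization parameter and shock-capturing viscosity follow standard textbook choices and are not necessarily the authors' exact formulation.

```latex
% Generic SUPG weak form with shock capturing (standard notation, assumed).
\begin{aligned}
&\text{Find } u_h \in V_h \text{ such that, for all } v_h \in V_h: \\
&\bigl(\partial_t u_h + \nabla \cdot f(u_h),\, v_h\bigr)
  + \sum_{K \in \mathcal{T}_h} \tau_K \,
    \bigl(\partial_t u_h + \nabla \cdot f(u_h),\; f'(u_h) \cdot \nabla v_h\bigr)_K \\
&\qquad + \sum_{K \in \mathcal{T}_h}
    \bigl(\nu_K^{\mathrm{sc}}\, \nabla u_h,\, \nabla v_h\bigr)_K = 0,
\end{aligned}
```

where the second sum is the streamline (SUPG) term weighted by the element parameter τ_K and the third sum is the shock-capturing term with residual-based artificial viscosity ν_K^sc.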

Keywords: two-phase flow, gas-particle mixture, inviscid two-fluid model, Euler equations, finite element method, streamline upwind Petrov-Galerkin, shock capturing

Procedia PDF Downloads 298
8833 An Inventory Management Model to Manage the Stock Level for Irregular Demand Items

Authors: Riccardo Patriarca, Giulio Di Gravio, Francesco Costantino, Massimo Tronci

Abstract:

An accurate inventory management policy plays a crucial role in several high-availability sectors. In these sectors, because of the high cost of spares and backorders, an (S-1, S) replenishment policy is necessary for high-availability items: a serviceable replacement item is shipped every time the inventory level decreases by one. This policy can be modelled with the Multi-Echelon Technique for Recoverable Item Control (METRIC), a system-based technique for defining the optimum stock level in a multi-echelon network while adopting measures in line with the decision-maker's perspective. METRIC defines an availability-cost function combining inventory costs and required service levels, using as inputs the demand trend, the supply and maintenance characteristics of the network, and the budget and availability constraints. Traditional METRIC relies on the hypothesis that a Poisson distribution adequately represents the demand of items with a low failure rate. In this research, we explore the effects of using a Poisson distribution to model the demand of low-failure-rate items characterized by an irregular demand trend, a characteristic not included in the traditional METRIC formulation, which therefore needs to be revised. Using the CV (Coefficient of Variation) and ADI (Average inter-Demand Interval) classification, we identify the inherent flaws of Poisson-based METRIC for irregular demand items and define an innovative ad hoc distribution that better fits irregular demands. This distribution allows proper stock levels to be defined, reducing stocking and backorder costs caused by the high irregularity of the demand trend. A case study in the aviation domain clarifies the benefits of this innovative METRIC approach.
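
A short sketch of the CV/ADI classification step mentioned above, using the widely cited Syntetos-Boylan cut-off values (ADI = 1.32, CV² = 0.49); both the cut-offs and the sample demand series are illustrative assumptions, not data from the study.

```python
import numpy as np

# CV/ADI demand classification with the common Syntetos-Boylan cut-offs.
# The demand series below is a synthetic placeholder.

demand = np.array([0, 0, 3, 0, 0, 0, 1, 0, 5, 0, 0, 2])   # demand per period

nonzero = demand[demand > 0]
adi = len(demand) / len(nonzero)                    # average inter-demand interval
cv2 = (nonzero.std(ddof=0) / nonzero.mean()) ** 2   # squared coefficient of variation of sizes

if adi <= 1.32 and cv2 <= 0.49:
    category = "smooth"
elif adi > 1.32 and cv2 <= 0.49:
    category = "intermittent"
elif adi <= 1.32 and cv2 > 0.49:
    category = "erratic"
else:
    category = "lumpy"

print(f"ADI = {adi:.2f}, CV^2 = {cv2:.2f} -> {category} demand")
```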

Keywords: METRIC, inventory management, irregular demand, spare parts

Procedia PDF Downloads 331
8832 The Influences of Facies and Fine Kaolinite Formation Migration on Sandstone's Reservoir Quality, Sarir Formation, Sirt Basin, Libya

Authors: Faraj M. Elkhatri

Abstract:

This study addresses the spatial and temporal distribution of diagenetic alterations and their impact on the reservoir quality of the Sarir Formation (present-day burial depth of about 9000 feet). Depositional facies and diagenetic alterations are the main controls on the reservoir quality of the Sarir Formation, Sirt Basin, Libya; these controls are based on lithology and grain size as well as on the types and distribution of authigenic clay minerals. The petrographic investigation, carried out on five sandstone wells in the study area, concentrated on the main rock components and the parameters that may affect the reservoirs. The main authigenic clay minerals are kaolinite and dickite, as confirmed by XRD analysis of the clay fraction; both are extensively present in all wells in large amounts. Traces of detrital smectite and minor amounts of illitized mud matrix were also identified in SEM images. Thin clay layers occurring as grain coatings at some depths are interpreted as remains of dissolved clay matrix that is partly transformed into kaolinite adjacent to and towards the pore throats. Most of the pore throats of this sandstone are open and relatively clean, although some fine material has formed in occluded pores. EDS analysis identifies this material as collections not only of kaolinite booklets but also of small kaolinite platelets derived from the disaggregation of larger booklets; these patches of kaolinite not only fill the pores but also coat some of the surrounding framework grains. Quartz grains, often enlarged by authigenic quartz overgrowths, partially occlude the pores and reduce porosity. Scanning Electron Microscopy with Energy Dispersive Spectroscopy (SEM-EDS) was conducted on the post-test samples to examine any mud-filtrate particles that might be present in the pore throats, and semi-quantitative elemental data on selected minerals observed during the SEM study were obtained with the EDS unit. The samples showed mostly clean, open pore throats with limited occlusion by kaolinite. Very fine-grained elemental combinations (Si/Al/Na/Cl, Si/Al/Ca/Cl/Ti, and Qtz/Ti) were identified and confirmed by EDS analysis. Overall, the fine-grained disaggregated material is identified as mainly kaolinite throughout the study area.

Keywords: pore throat, fine migration, formation damage, solids plugging, porosity loss

Procedia PDF Downloads 139
8831 Statistical Analysis to Compare between Smart City and Traditional Housing

Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh

Abstract:

Smart cities play an important role in modern life. Integration and automation between the different features of a modern city and information technologies improve smart-city efficiency, energy management, human and equipment resource management, quality of life and utilization of resources for the customers. One difficulty in this path is the use of, and the interfacing and linking between, software, hardware and other IT technologies to develop and optimize processes in various business fields such as construction, supply chain management and transportation, in parallel with cost-effectiveness and resource reduction. Smart cities are also intended to play a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological concerns. Energy management is one of the most important issues for smart houses within smart cities and communities, because of the sensitivity of energy systems, the need to reduce energy wastage and the need to make the best use of the energy required. In particular, the energy consumption of smart houses matters for the economic balance and energy management of the smart city, as it contributes significantly to energy saving and to the reduction of energy wastage. This paper develops the features and concept of a smart city in terms of overall efficiency through a set of effective variables; the selected variables and observations are analyzed to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study to improve the overall efficiency of the smart city: first, by increasing the effectiveness of smart houses through an automated solar photovoltaic system, an RFID system, smart meters and other major elements, interfacing software and hardware devices as well as IT technologies; and second, by enhancing energy management through energy saving within the smart house via these variables. The main objective of the smart city and the smart houses is to produce energy and increase its efficiency through the selected variables, with a comfortable and harmless atmosphere for the customers and with control over the energy consumption of the smart house using the developed IT technologies. Initially, a comparison between traditional housing and smart-city samples is conducted to indicate the more efficient system. The main variables involved in measuring the overall efficiency of the system are then analyzed to identify and prioritize the variables according to their influence on the model. The results of this model can be used for comparison and benchmarking against a traditional lifestyle to demonstrate the advantages of smart cities. Furthermore, given the cost and expected shortage of natural resources in the near future, the limited research available for the region, and the potential offered by the climate and the governmental vision, the results and analysis of this study can be used as a key indicator for selecting the most effective variables or devices during the design and construction phases.
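
As a hedged illustration of the kind of comparison described above, the sketch below runs a two-sample Welch t-test on a hypothetical energy-consumption variable for smart and traditional housing samples; the variable choice and the synthetic data are assumptions, not the study's ten variables or observations.

```python
import numpy as np
from scipy import stats

# Two-sample comparison of monthly energy consumption (kWh) between
# smart-house and traditional-house samples. Data are synthetic placeholders.

rng = np.random.default_rng(0)
smart = rng.normal(loc=620.0, scale=60.0, size=30)        # smart houses (assumed)
traditional = rng.normal(loc=740.0, scale=80.0, size=30)  # traditional houses (assumed)

t_stat, p_value = stats.ttest_ind(smart, traditional, equal_var=False)  # Welch t-test
saving = 100.0 * (traditional.mean() - smart.mean()) / traditional.mean()

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean saving = {saving:.1f}%")
```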

Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving

Procedia PDF Downloads 100
8830 Impact of Displacement Durations and Monetary Costs on the Labour Market within a City Consisting of Four Areas: A Theoretical Approach

Authors: Aboulkacem El Mehdi

Abstract:

We develop a theoretical model at the crossroads of labour and urban economics to explain the mechanism through which the duration of home-workplace trips and their monetary costs affect labour demand and supply in a spatially scattered labour market, and how these are affected by a change in passenger transport infrastructures and services. The spatial disconnection between homes and job opportunities is referred to as the spatial mismatch hypothesis (SMH), and its harmful impact on employment has been the subject of numerous theoretical propositions. However, the theoretical models proposed so far are patterned on the American context, which is particular in that it is marked by racial discrimination against Blacks in the housing and labour markets. It is therefore natural that most of these models are built to reproduce a steady state in which agents carry out their economic activities in a mono-centric city where most unskilled jobs are created in the suburbs, far from the Blacks who dwell in the city centre, generating high unemployment rates for Blacks, while the White population resides in the suburbs and has a low unemployment rate. Our model does not rely on racial discrimination and does not aim at reproducing a steady state replicating these stylized facts; it takes the main principle of the SMH, the spatial disconnection between homes and workplaces, as its starting point. One innovative aspect of the model is that it deals with an SMH-related issue at an aggregate level: we link the parameters of the passenger transport system to employment across the whole city. We consider a city that consists of four areas: two are residential areas with unemployed workers, and the other two host firms looking for labour. Workers compare the indirect utility of working in each area with the utility of unemployment and choose between applying for the job that generates the highest indirect utility and not applying at all. This trade-off accounts for the monetary and time expenditures generated by trips between the residential areas and the working areas, and each expenditure is formulated explicitly so that its impact can be studied separately from the other. The first findings show that unemployed workers living in an area served by good transport infrastructures and services are more likely to prefer activity to unemployment and to supply a higher 'quantity' of labour than those living in an area where transport infrastructures and services are poorer. We also show that firms located in the most accessible area receive many more applications and are more likely to hire the workers who supply the highest quantity of labour than firms located in the less accessible area. We are currently working on the matching process between firms and job seekers and on how the equilibrium between labour demand and supply is reached.
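
A minimal sketch of the participation decision described above, in generic notation that is not necessarily the authors' exact specification: a worker in residential area i applies to the employment area offering the highest indirect utility only if it beats the utility of unemployment, with the monetary cost and the trip duration entering separately.

```latex
% Assumed notation: w_j wage in employment area j, c_{ij} monetary commuting
% cost, t_{ij} trip duration, \phi disutility of travel time, V^u utility of
% unemployment, residential areas i in {1,2}, employment areas j in {3,4}.
\begin{aligned}
V_{ij} &= u\!\left(w_j - c_{ij}\right) - \phi\, t_{ij}
  && \text{(indirect utility of working in area } j\text{)} \\
\text{apply to } j^{*}(i) &= \arg\max_{j \in \{3,4\}} V_{ij}
  && \text{iff } \max_{j \in \{3,4\}} V_{ij} \ge V^{u}.
\end{aligned}
```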

Keywords: labour market, passenger transport infrastructure, spatial mismatch hypothesis, urban economics

Procedia PDF Downloads 274
8829 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular function and allows a group of cells to survive as a population: through this interaction the cells work in a coordinated and collaborative way that facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical junctions they can transmit malignancy signals. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of approaches, spanning mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) and computational models (e.g., process algebras for modeling behavior and variation in molecular systems), and different simulation tools, mathematical and computational, have been developed on top of them. The study of cellular and molecular processes in cancer has likewise found valuable support in simulation tools that allow in silico experimentation at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical and biochemical reactions in an efficient and accurate way. The main idea behind Cellulat is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is suited to modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, interaction mechanisms involving two or more cells, which is essential in the scenario discussed in this work. During this work we demonstrated the application of Cellulat to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way proposed key molecules that may prevent malignant signals from reaching the cells that surround the tumor cells. In this manner, we identified the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication and, therefore, in the dissemination of cancer cells, and we verified through in silico experiments how the inhibition of this signaling pathway prevents the cells surrounding a cancerous cell from being transformed.
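
A minimal sketch of Gillespie's stochastic simulation algorithm, the engine cited above, applied to a toy two-reaction system; the species, reactions, and rate constants are placeholders and do not represent the Wnt/β-catenin network modeled in Cellulat.

```python
import numpy as np

# Gillespie SSA for a toy system:
#   signal -> complex   (rate k_bind * signal)
#   complex -> (empty)  (rate k_degrade * complex)
# Species and rate constants are illustrative placeholders.

rng = np.random.default_rng(1)

x = np.array([100, 0])          # state: [signal, receptor-bound complex]
k_bind, k_degrade = 0.01, 0.1

t, t_end = 0.0, 50.0
while t < t_end:
    a = np.array([k_bind * x[0], k_degrade * x[1]])   # reaction propensities
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)    # time to the next reaction event
    r = rng.choice(2, p=a / a0)       # which reaction fires
    if r == 0:
        x += [-1, 1]                  # signal binds: signal -> complex
    else:
        x += [0, -1]                  # complex degrades

print(f"t = {t:.1f}, state = {x}")
```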

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 240