Search results for: adaptive thermal comfort model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20429

8879 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has been recently proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II highly depends on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640×640×640 voxels, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review. 
In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was then used to train data-driven Deep Learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested both on synthetic data not used in training and on real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to learn iron concentrations in areas of interest directly, and more effectively than existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
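The construction of the training set, drawing each synthetic head's tissue properties from literature-derived distributions, can be sketched as follows. The tissue names, property ranges, and the Gaussian form of the priors are illustrative assumptions for this sketch, not values from the paper:

```python
import random

# Hypothetical priors (mean, sd) per tissue; illustrative values only,
# not the literature-derived distributions used in the study.
TISSUE_PRIORS = {
    "putamen":      {"r2star_per_s": (30.0, 3.0), "iron_mg_per_kg": (130.0, 15.0)},
    "caudate":      {"r2star_per_s": (25.0, 2.5), "iron_mg_per_kg": (90.0, 12.0)},
    "white_matter": {"r2star_per_s": (20.0, 2.0), "iron_mg_per_kg": (40.0, 8.0)},
}

def sample_head_realization(rng):
    """Draw one synthetic head's tissue properties from the priors."""
    return {tissue: {prop: rng.gauss(mu, sd) for prop, (mu, sd) in props.items()}
            for tissue, props in TISSUE_PRIORS.items()}

# Repeating this for many realizations (plus geometry morphing and simulated
# MRI noise, omitted here) yields the large synthetic training set.
rng = random.Random(42)
training_set = [sample_head_realization(rng) for _ in range(1000)]
```

Each realization is independent, so the training set covers the physiological variability encoded in the priors rather than a single canonical head.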

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 129
8878 Investment and Economic Growth: An Empirical Analysis for Tanzania

Authors: Manamba Epaphra

Abstract:

This paper analyzes the causal relationships between domestic private investment, public investment, foreign direct investment and economic growth in Tanzania during the 1970-2014 period. A modified neo-classical growth model that includes control variables such as trade liberalization, life expectancy and macroeconomic stability, proxied by inflation, is used to estimate the impact of investment on economic growth. Also, economic growth models based on Phetsavong and Ichihashi (2012), and Le and Suruga (2005), are used to estimate the crowding-out effect of public investment on private domestic investment on the one hand, and on foreign direct investment on the other. A correlation test is applied to check the correlation among independent variables, and the results show very low correlation, suggesting that multicollinearity is not a serious problem. Moreover, the diagnostic tests, including the RESET regression specification error test, the Breusch-Godfrey serial correlation LM test, the Jarque-Bera normality test and White’s heteroskedasticity test, reveal that the model has no signs of misspecification and that the residuals are serially uncorrelated, normally distributed and homoskedastic. Generally, the empirical results show that domestic private investment plays an important role in economic growth in Tanzania. FDI also tends to affect growth positively, while control variables such as high population growth and inflation appear to harm economic growth. Results also reveal that control variables such as trade openness and life expectancy improvement tend to increase real GDP growth. Moreover, a revealed negative, albeit weak, association between public and private investment suggests that the positive effect of domestic private investment on economic growth is reduced when the public investment-to-GDP ratio exceeds 8-10 percent. Thus, there is a great need for promoting domestic saving so as to encourage domestic investment for economic growth.
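The multicollinearity screen described above amounts to inspecting pairwise correlations among the regressors. A minimal sketch (the variable names and the 0.8 cutoff are illustrative choices, not from the paper):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def flag_collinear(series_by_name, cutoff=0.8):
    """Return regressor pairs whose |r| exceeds the cutoff."""
    names = list(series_by_name)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if abs(pearson_r(series_by_name[a], series_by_name[b])) > cutoff]
```

Pairs returned by `flag_collinear` would warrant dropping or combining regressors before estimation; an empty result supports the paper's conclusion that multicollinearity is not a serious problem.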

Keywords: FDI, public investment, domestic private investment, crowding out effect, economic growth

Procedia PDF Downloads 282
8877 Transient Simulation Using SPACE for ATLAS Facility to Investigate the Effect of Heat Loss on Major Parameters

Authors: Suhib A. Abu-Seini, Kyung-Doo Kim

Abstract:

A heat loss model for the ATLAS facility was introduced using SPACE code predefined correlations and various dialing factors. All previous simulations were carried out with a heat-loss-free input: the facility was considered to be completely insulated, and the core power was reduced by the experimentally measured heat loss values to compensate for the loss of heat. This study, by contrast, considers heat loss throughout the simulation. The new heat loss model will affect the SPACE code simulation, since heat leaking out of the system during a transient alters many parameters related to temperature and temperature difference. To that end, a Station Blackout followed by a multiple Steam Generator Tube Rupture accident is simulated using both the insulated-system approach and the newly introduced steady-state heat loss input. Major parameters such as system temperatures, pressures, and flow rates are compared, and various analyses are suggested on that basis, since the experimental values cannot serve as the reference to validate the expected outcome. This study not only shows the significance of considering heat loss in the prevention and mitigation of various incidents, design-basis and beyond-design-basis accidents, by giving a detailed account of the behavior of the ATLAS facility during both steady state and a major transient, but also presents a verification of how credible the acquired ATLAS data are, since steady-state heat loss values were already mismatched between SPACE simulation results and the ATLAS data acquisition system. Acknowledgement: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea.
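The compensation strategy described above, lowering the core power by a measured heat loss instead of modeling the loss, can be pictured with a lumped heat balance. The UA value and temperatures below are illustrative assumptions, not ATLAS data:

```python
def net_core_power(q_core_kw, ua_kw_per_k, t_fluid_k, t_ambient_k):
    """Subtract a lumped wall heat loss, Q_loss = UA * (T_fluid - T_ambient),
    from the nominal core power. The 'insulated' approach instead lowers
    q_core by a fixed measured Q_loss rather than modeling it."""
    q_loss_kw = ua_kw_per_k * (t_fluid_k - t_ambient_k)
    return q_core_kw - q_loss_kw, q_loss_kw

# During a transient the fluid temperature (and hence Q_loss) changes,
# which is why a fixed steady-state compensation can drift from reality.
net_kw, loss_kw = net_core_power(1560.0, 0.05, 600.0, 300.0)
```

Because `Q_loss` tracks the fluid temperature, a transient that heats or cools the system changes the loss term continuously, which is precisely what the fixed steady-state compensation cannot capture.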

Keywords: ATLAS, heat loss, simulation, SPACE, station blackout, steam generator tube rupture, verification

Procedia PDF Downloads 219
8876 Characterization of Ethanol-Air Combustion in a Constant Volume Combustion Bomb Under Cellularity Conditions

Authors: M. Reyes, R. Sastre, P. Gabana, F. V. Tinaut

Abstract:

In this work, an optical characterization of ethanol-air laminar combustion is presented in order to investigate the origin of the instabilities developed during combustion, the onset of the cellular structure, and the laminar burning velocity. Experimental ethanol-air tests have been carried out in an optical cylindrical constant volume combustion bomb equipped with a Schlieren technique to record the flame development and the flame front surface wrinkling. With this procedure, it is possible to obtain the flame radius and characterize the time when the instabilities become visible through the appearance of cells and the development of the cellular structure. Ethanol is an aliphatic alcohol with interesting characteristics for use as a fuel in Internal Combustion Engines, and it can be biologically synthesized from biomass. The laminar burning velocity is an important parameter used in simulations to obtain the turbulent flame speed, whereas the flame front structure and the instabilities developed during combustion are important for understanding the transition to turbulent combustion and characterizing the increase in flame propagation speed in premixed flames. The cellular structure is spontaneously generated by volume forces and by diffusional-thermal and hydrodynamic instabilities. Many authors have studied the combustion of ethanol-air and of mixtures of ethanol with other fuels. However, there is a lack of works investigating the instabilities and the development of a cellular structure in ethanol flames; only a few works have characterized the ethanol-air combustion instabilities in spherical flames. In the present work, a parametric study is made by varying the fuel/air equivalence ratio (0.8-1.4), initial pressure (0.15-0.3 MPa) and initial temperature (343-373 K), using a design of experiments of type I-optimal. In rich mixtures, it is possible to distinguish the cellular structure formed by the hydrodynamic effect from that formed by the thermo-diffusive effect. 
Results show that ethanol-air flames tend to stabilize as the equivalence ratio decreases in lean mixtures and develop a cellular structure with the increment of initial pressure and temperature.
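The extraction of a laminar burning velocity from the Schlieren radius records can be sketched as follows; the radius history and density ratio below are illustrative, and flame stretch corrections are omitted for brevity:

```python
def laminar_burning_velocity(times_s, radii_m, expansion_ratio):
    """Flame propagation speed S_b = dR/dt from the radius history;
    the laminar burning velocity is then S_u = S_b / sigma, where
    sigma = rho_unburned / rho_burned is the expansion ratio."""
    speeds = [(radii_m[i] - radii_m[i - 1]) / (times_s[i] - times_s[i - 1])
              for i in range(1, len(radii_m))]
    s_b = sum(speeds) / len(speeds)  # mean propagation speed
    return s_b / expansion_ratio

# Illustrative: radius growing linearly at 2 m/s, sigma ~ 7
times = [0.000, 0.001, 0.002, 0.003]
radii = [0.010, 0.012, 0.014, 0.016]
s_u = laminar_burning_velocity(times, radii, 7.0)
```

In practice only the quasi-steady, pre-cellular portion of the radius history would be used, since the onset of cellularity accelerates the flame and biases dR/dt.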

Keywords: ethanol, instabilities, premixed combustion, schlieren technique, cellularity

Procedia PDF Downloads 62
8875 Integrating Experiential Real-World Learning in Undergraduate Degrees: Maximizing Benefits and Overcoming Challenges

Authors: Anne E. Goodenough

Abstract:

One of the most important roles of higher education professionals is to ensure that graduates have excellent employment prospects. This means providing students with the skills necessary to be immediately effective in the workplace. Increasingly, universities are seeking to achieve this by moving from lecture-based and campus-delivered curricula to more varied delivery, which takes students out of their academic comfort zone and allows them to engage with, and be challenged by, real world issues. One popular approach is the integration of problem-based learning (PBL) projects into curricula. However, although the potential benefits of PBL are considerable, it can be difficult to devise projects that are meaningful, rather than mere ‘hoop jumping’ exercises. This study examines three-way partnerships between academics, students, and external link organizations. It studied the experiences of all partners involved in different collaborative projects to identify how benefits can be maximized and challenges overcome. Focal collaborations included: (1) development of real-world modules with novel assessment, whereby the organization became the ‘client’ for student consultancy work; (2) frameworks where students collected/analyzed data for link organizations in research methods modules; (3) placement-based internships and dissertations; (4) immersive fieldwork projects in novel locations; and (5) students working as partners on staff-led research with link organizations. Focus groups, questionnaires and semi-structured interviews were used to identify opportunities and barriers, while quantitative analysis of students’ grades was used to determine academic effectiveness. Common challenges identified by academics were finding suitable link organizations and devising projects that simultaneously provided education opportunities and tangible benefits. 
There was no ‘one size fits all’ formula for success, but careful planning and ensuring clarity of roles/responsibilities were vital. Students were very positive about collaboration projects. They identified benefits to confidence, time-keeping and communication, as well as conveying their enthusiasm when their work was of benefit to the wider community. They frequently highlighted the employability opportunities that collaborative projects opened up, and analysis of grades demonstrated the potential for such projects to increase attainment. Organizations generally recognized the value of project outputs, but often required considerable assistance to put the right scaffolding in place to ensure projects worked. Benefits were maximized by ensuring projects were well-designed, innovative, and challenging. Co-publication of projects in peer-reviewed journals sometimes gave additional benefits for all involved, being especially valuable for students’ curricula vitae. PBL and student projects are by no means new pedagogic approaches: the novelty here came from creating meaningful three-way partnerships between academics, students, and link organizations at all undergraduate levels. Such collaborations can allow students to make a genuine contribution to knowledge, answer real questions, and solve actual problems, all while providing tangible benefits to organizations. Because the projects are actually needed, students tend to engage with learning at a deep level. This enhances the student experience, increases attainment, encourages the development of subject-specific and transferable skills, and promotes networking opportunities. Such projects frequently rely upon students and staff working collaboratively, thereby also acting to break down the traditional teacher/learner division that is typically unhelpful in developing students as advanced learners.

Keywords: higher education, employability, link organizations, innovative teaching and learning methods, interactions between enterprise and education, student experience

Procedia PDF Downloads 179
8874 Study of Error Analysis and Sources of Uncertainty in the Measurement of Residual Stresses by the X-Ray Diffraction

Authors: E. T. Carvalho Filho, J. T. N. Medeiros, L. G. Martinez

Abstract:

Residual stresses are self-equilibrating stresses that act on the microstructure of a material without the application of an external load. They are elastic stresses and can be induced by mechanical, thermal and chemical processes, causing a deformation gradient in the crystal lattice and favoring premature failure of mechanical components. The search for measurements with good reliability has been of great importance for the manufacturing industries. Several methods are able to quantify these stresses according to physical principles and the mechanical behavior of the material. The X-ray diffraction technique is one of the most sensitive to small variations of the crystal lattice, since the X-ray beam interacts with the interplanar distance. Being very sensitive, the technique is also susceptible to variations in the measurements, requiring a study of the factors that influence the final result. Instrumental and operational factors, form deviations of the samples, and the geometry of the analyses are some of the variables that need to be considered and analyzed in order to obtain the true measurement. The aim of this work is to analyze the sources of error inherent to the residual stress measurement process by the X-ray diffraction technique, making an interlaboratory comparison to verify the reproducibility of the measurements. In this work, two specimens were machined, differing from each other in surface finish: grinding and polishing. Additionally, iron powder with particle size less than 45 µm was selected as a reference (as recommended by the ASTM E915 standard) for the tests. To verify the deviations caused by the equipment, the specimens were positioned and, under the same analysis conditions, seven measurements were carried out at 11 ψ tilts. To verify sample positioning errors, seven measurements were performed, repositioning the sample for each measurement. 
To check geometry errors, the measurements were repeated for both the Bragg-Brentano and the parallel-beam geometries. In order to verify the reproducibility of the method, the measurements were performed in two different laboratories using different equipment. The results were statistically analyzed and the errors quantified.
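For context, the standard sin²ψ evaluation underlying such measurements fits the measured lattice spacing against sin²ψ and converts the slope into stress. A sketch with illustrative elastic constants (not the values used in the study):

```python
def stress_sin2psi(sin2psi, d_psi, d0, e_mpa, nu):
    """Classic sin^2(psi) method: least-squares slope of d vs sin^2(psi),
    then sigma = E / ((1 + nu) * d0) * slope."""
    n = len(sin2psi)
    mx = sum(sin2psi) / n
    my = sum(d_psi) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(sin2psi, d_psi))
             / sum((x - mx) ** 2 for x in sin2psi))
    return e_mpa / ((1.0 + nu) * d0) * slope

# Illustrative check: synthetic d-spacings generated for a -300 MPa stress
E, NU, D0 = 210_000.0, 0.28, 1.1700  # MPa, dimensionless, angstrom (illustrative)
true_slope = -300.0 * (1 + NU) * D0 / E
x = [0.0, 0.1, 0.2, 0.3, 0.4]
d = [D0 + true_slope * xi for xi in x]
sigma = stress_sin2psi(x, d, D0, E, NU)  # recovers about -300 MPa
```

Scatter of the fitted slope across the repeated ψ-tilt measurements is exactly where the instrumental, positioning and geometry errors studied in the paper show up.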

Keywords: residual stress, x-ray diffraction, repeatability, reproducibility, error analysis

Procedia PDF Downloads 175
8873 Building Information Management Advantages, Adaptation, and Challenges of Implementation in Kabul Metropolitan Area

Authors: Mohammad Rahim Rahimi, Yuji Hoshino

Abstract:

In recent years, Building Information Management (BIM) has received widespread consideration in the Architecture, Engineering and Construction (AEC) industry. BIM has brought innovation to the AEC industry and has the ability to improve construction with higher quality and reduced project time and budget. BIM supports both the model and the process in the AEC industry; the process includes, but is not limited to, the project life cycle, estimating, delivery and, generally, the way projects are managed. This research covers the advantages of BIM and the adaptation and challenges of its implementation in the Kabul region. The Capital Region Independent Development Authority (CRIDA) is responsible for implementing development projects in the Kabul region. The study considered the advantages of, and reasons for, BIM performance in Afghanistan based on an online survey and data. In addition, five projects were studied; they were selected because of their many design revisions and changes. Although most of the projects had problems in the design and implementation stages, the canal project was discussed in detail, the main reasons for its problems being repeated changes and revisions due to a lack of information, planning, and management. Two projects based on BIM utilization in Japan, the Shinsuizenji Station and Oita River dam projects, were also discussed; these were implemented, and are being implemented, according to BIM requirements. The investigation focused on BIM usage and the project implementation process. Finally, the CRIDA projects were compared with BIM utilization in Japan, focusing on the use of the model and the way problems are solved with BIM. In conclusion, BIM has the capacity to prevent repeated design changes and revisions. Achieving those objectives requires a focus on data management and sharing, BIM training, and the use of new technology.

Keywords: construction information management, implementation and adaptation of BIM, project management, developing countries

Procedia PDF Downloads 125
8872 Nanoparticles Activated Inflammasome Lead to Airway Hyperresponsiveness and Inflammation in a Mouse Model of Asthma

Authors: Pureun-Haneul Lee, Byeong-Gon Kim, Sun-Hye Lee, An-Soo Jang

Abstract:

Background: Nanoparticles may pose adverse health effects due to particulate matter inhalation. Nanoparticle exposure induces cell and tissue damage, causing local and systemic inflammatory responses. The inflammasome is a major regulator of inflammation through its activation of pro-caspase-1, which cleaves pro-interleukin-1β (IL-1β) into its mature form and may signal acute and chronic immune responses to nanoparticles. Objective: The aim of the study was to identify whether nanoparticles exaggerate the inflammasome pathway, leading to airway inflammation and hyperresponsiveness, in an allergic mouse model of asthma. Methods: Mice were treated with saline (sham), OVA-sensitized and challenged (OVA), or titanium dioxide nanoparticles. Lung interleukin 1 beta (IL-1β), interleukin 18 (IL-18), NACHT, LRR and PYD domains-containing protein 3 (NLRP3) and caspase-1 levels were assessed by Western blot. Caspase-1 was checked by immunohistochemical staining. Reactive oxygen species were measured via the markers 8-isoprostane and carbonyl by ELISA. Results: Airway inflammation and hyperresponsiveness increased in OVA-sensitized/challenged mice, and these responses were exaggerated by TiO2 nanoparticle exposure. TiO2 nanoparticle treatment increased IL-1β and IL-18 protein expression in OVA-sensitized/challenged mice. TiO2 nanoparticles augmented the expression of NLRP3 and caspase-1, leading to the formation of active caspase-1 in the lung. Lung caspase-1 expression was increased in OVA-sensitized/challenged mice, and these responses were exaggerated by TiO2 nanoparticle exposure. Reactive oxygen species were increased in OVA-sensitized/challenged mice and in OVA-sensitized/challenged plus TiO2 exposed mice. Conclusion: Our data demonstrate that the inflammasome pathway is activated in asthmatic lungs following nanoparticle exposure, suggesting that targeting the inflammasome may help control nanoparticle-induced airway inflammation and responsiveness.

Keywords: bronchial asthma, inflammation, inflammasome, nanoparticles

Procedia PDF Downloads 370
8871 The Feasibility of Glycerol Steam Reforming in an Industrial Sized Fixed Bed Reactor Using Computational Fluid Dynamic (CFD) Simulations

Authors: Mahendra Singh, Narasimhareddy Ravuru

Abstract:

For the past decade, the production of biodiesel has significantly increased, along with that of its by-product, glycerol. The massive entry of biodiesel-derived glycerol into the glycerol market has caused its value to plummet. Newer ways to utilize the glycerol by-product must be implemented, or the biodiesel industry will face serious economic problems. The biodiesel industry should consider steam reforming glycerol to produce hydrogen gas. Steam reforming is the most efficient way of producing hydrogen, and there is a lot of demand for it in the petroleum and chemical industries. This study investigates the feasibility of glycerol steam reforming in an industrial-sized fixed bed reactor. In this paper, using computational fluid dynamic (CFD) simulations, the extent of the transport resistances that would occur in an industrial-sized reactor can be visualized. An important parameter in reactor design is the size of the catalyst particle. The catalyst particle cannot be so large that transport resistances become too high, but also not so small that an extraordinary amount of pressure drop occurs. The goal of this paper is to find the catalyst size that, under various flow rates, results in the highest conversion. CFD simulations captured the transport resistances, and a pseudo-homogeneous reactor model was used to evaluate the pressure drop and conversion. The CFD simulations showed that glycerol steam reforming has strong internal diffusion resistances, resulting in extremely low effectiveness factors. In the pseudo-homogeneous reactor model, the highest conversion obtained at a Reynolds number of 100 (29.5 kg/h) was 9.14%, using a 1/6 inch catalyst diameter. Due to the low effectiveness factors and high carbon deposition rates, a fluidized bed is recommended as the appropriate reactor to carry out glycerol steam reforming.
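The particle-size trade-off described above, internal diffusion versus pressure drop, can be sketched with the spherical-pellet effectiveness factor and the Ergun equation. All numerical values below are illustrative assumptions, not the paper's operating conditions:

```python
import math

def effectiveness_factor(d_p_m, k_per_s, d_eff_m2_s):
    """Internal effectiveness factor for a first-order reaction in a spherical
    pellet, with generalized Thiele modulus phi = (d_p / 6) * sqrt(k / D_eff)."""
    phi = (d_p_m / 6.0) * math.sqrt(k_per_s / d_eff_m2_s)
    if phi < 1e-8:
        return 1.0  # no diffusion limitation
    return (1.0 / phi) * (1.0 / math.tanh(3.0 * phi) - 1.0 / (3.0 * phi))

def ergun_pressure_drop(u_m_s, d_p_m, voidage, rho_kg_m3, mu_pa_s, bed_len_m):
    """Ergun equation: pressure drop across a packed bed (Pa)."""
    viscous = 150.0 * mu_pa_s * (1 - voidage) ** 2 / (voidage ** 3 * d_p_m ** 2) * u_m_s
    inertial = 1.75 * rho_kg_m3 * (1 - voidage) / (voidage ** 3 * d_p_m) * u_m_s ** 2
    return (viscous + inertial) * bed_len_m

# Larger pellets: lower pressure drop but lower effectiveness factor.
eta_small = effectiveness_factor(0.002, 5.0, 1e-7)
eta_large = effectiveness_factor(0.008, 5.0, 1e-7)
```

The optimal diameter sits where the conversion gained from a higher effectiveness factor is not eaten up by the Ergun pressure drop, which grows sharply as the particles shrink.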

Keywords: computational fluid dynamic, fixed bed reactor, glycerol, steam reforming, biodiesel

Procedia PDF Downloads 300
8870 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli

Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma

Abstract:

Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge. Scrutinizing alternative weapons like antimicrobial peptides to strengthen the existing tuberculosis artillery is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) along with isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify its short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte derived macrophages (MDMs). Log phase M. tb was grown along with HBD-1 and Pep B for 7 days. M. tb infected MDMs were treated with HBD-1 and Pep B for 72 hours. Thereafter, colony forming unit (CFU) enumeration was performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. The dormant M. tb models were prepared by following two approaches and treated with different concentrations of HBD-1 and Pep B. Firstly, 20-22 day old M. tb H37Rv was grown in potassium deficient Sauton media for 35 days. The presence of dormant bacilli was confirmed by Nile red staining. Dormant bacilli were further treated with rifampicin, isoniazid, HBD-1 and its motif for 7 days. The effect of both peptides on latent bacilli was assessed by CFU and most probable number (MPN) enumeration. Secondly, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days. Histopathology was done to confirm granuloma formation. The granuloma thus formed was incubated for 72 hours with rifampicin, HBD-1 and Pep B individually. The difference in bacillary load was determined by CFU enumeration. 
The minimum inhibitory concentrations of HBD-1 and Pep B restricting the growth of mycobacteria in vitro were 2 μg/ml and 20 μg/ml respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1 μg/ml and 5 μg/ml respectively. The Nile red positive bacterial population, high MPN/low CFU count and tolerance to isoniazid confirmed the formation of the potassium deficiency-based dormancy model. HBD-1 (8 μg/ml) showed 96% and 99% killing, and Pep B (40 μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration respectively. Further, H&E stained aggregates of macrophages and lymphocytes, acid fast bacilli surrounded by cellular aggregates and rifampicin resistance indicated the formation of the human granuloma dormancy model. HBD-1 (8 μg/ml) led to an 81.3% reduction in CFU, whereas its motif Pep B (40 μg/ml) showed only a 54.66% decrease in bacterial load inside the granuloma. Thus, the present study indicated that HBD-1 and its motif are effective antimicrobial players against both actively growing and dormant M. tb. They should be further explored to tap their potential in designing a powerful weapon for combating tuberculosis.

Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis

Procedia PDF Downloads 261
8869 Material Concepts and Processing Methods for Electrical Insulation

Authors: R. Sekula

Abstract:

Epoxy composites are broadly used as electrical insulation for high voltage applications, since only such materials can fulfill the particular mechanical, thermal, and dielectric requirements. However, the properties of the final product are strongly dependent on a proper manufacturing process with minimized material failures, such as excessive shrinkage, voids and cracks. Therefore, the application of proper materials (epoxy, hardener, and filler) and process parameters (mold temperature, filling time, filling velocity, initial temperature of internal parts, gelation time), as well as design and geometric parameters, are essential for the final quality of the produced components. In this paper, an approach for three-dimensional modeling of all molding stages, namely filling, curing and post-curing, is presented. The reactive molding simulation tool is based on a commercial CFD package and includes dedicated models describing viscosity and reaction kinetics that have been successfully implemented to simulate the reactive nature of the system with its exothermic effect. A dedicated simulation procedure for stress and shrinkage calculations, as well as simulation results, are also presented in the paper. The second part of the paper is dedicated to recent developments in formulations of functional composites for electrical insulation applications, focusing on thermally conductive materials. Concepts based on filler modifications for epoxy electrical composites are presented, including the resulting properties. Finally, with tough environmental regulations in mind, in addition to current process and design aspects, an approach for product re-design is presented, focusing on the replacement of the epoxy material with a thermoplastic one. Such a ‘design-for-recycling’ method is one of the new directions associated with the development of new material and processing concepts for electrical products, and brings a lot of additional research challenges. 
One successful product is presented to illustrate this methodology.
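Reaction-kinetics models of the kind mentioned above are often of the autocatalytic (Kamal) type. A minimal isothermal sketch with illustrative rate constants; the actual formulation and parameters of the simulation tool are not given in the abstract:

```python
def cure_profile(k1, k2, m, n, dt_s, t_end_s):
    """Explicit-Euler integration of the Kamal cure model
    d(alpha)/dt = (k1 + k2 * alpha**m) * (1 - alpha)**n   (isothermal),
    where alpha is the degree of cure, from 0 (uncured) to 1 (fully cured)."""
    alpha, t = 0.0, 0.0
    history = [(t, alpha)]
    while t < t_end_s:
        rate = (k1 + k2 * alpha ** m) * (1.0 - alpha) ** n
        alpha = min(alpha + rate * dt_s, 1.0)
        t += dt_s
        history.append((t, alpha))
    return history

# Illustrative constants; alpha rises monotonically from 0 toward 1.
profile = cure_profile(k1=1e-3, k2=5e-2, m=1.0, n=2.0, dt_s=1.0, t_end_s=3600.0)
```

In a full reactive-molding simulation this kinetic source term would be coupled to the energy equation, since the exothermic heat release feeds back into the local cure rate.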

Keywords: curing, epoxy insulation, numerical simulations, recycling

Procedia PDF Downloads 272
8868 Predicting Long-Term Performance of Concrete under Sulfate Attack

Authors: Elakneswaran Yogarajah, Toyoharu Nawa, Eiji Owaki

Abstract:

Cement-based materials have been used in various reinforced concrete structural components as well as in nuclear waste repositories. Sulfate attack has been an environmental issue for cement-based materials exposed to sulfate-bearing groundwater or soils, and it plays an important role in the durability of concrete structures. The reaction between penetrating sulfate ions and cement hydrates can result in swelling, spalling and cracking of the cement matrix in concrete. These processes induce a reduction of mechanical properties and a decrease in the service life of an affected structure. It has been identified that the precipitation of secondary sulfate-bearing phases such as ettringite, gypsum, and thaumasite can cause the damage. Furthermore, crystallization of soluble salts such as sodium sulfate induces degradation through crystal formation and phase changes. Crystallization of mirabilite (Na₂SO₄·10H₂O) and thenardite (Na₂SO₄), or their phase changes (mirabilite to thenardite or vice versa) due to temperature or sodium sulfate concentration, do not involve any chemical interaction with cement hydrates. Over the past couple of decades, intensive work has been carried out on sulfate attack in cement-based materials. However, several uncertainties still exist regarding the mechanism of the damage of concrete in sulfate environments. In this study, modelling work has been conducted to investigate the chemical degradation of cementitious materials in various sulfate environments. Both internal and external sulfate attack are considered in the simulation. For the internal sulfate attack, the hydrate assemblage and pore solution chemistry of co-hydrating Portland cement (PC) and slag mixed with sodium sulfate solution are calculated to determine the degradation of the PC and slag-blended cementitious materials. Pitzer interaction coefficients were used to calculate the activity coefficients of the solution species at high ionic strength. 
The deterioration mechanism of co-hydrating cementitious materials with 25% Na₂SO₄ by weight is the formation of mirabilite crystals and ettringite; their formation strongly depends on the sodium sulfate concentration and temperature. For the external sulfate attack, the deterioration of various types of cementitious materials under external sulfate ingress is simulated through a reactive transport model. The reactive transport model is verified against experimental data in terms of the phase assemblage of various cementitious materials, with spatial distribution, for different sulfate solutions. Finally, the reactive transport model is used to predict the long-term performance of cementitious materials exposed to 10% Na₂SO₄ for 1000 years. The dissolution of cement hydrates and the secondary formation of sulfate-bearing products, mainly ettringite, are the dominant degradation mechanisms, but not sodium sulfate crystallization.
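The precipitation criterion behind such phase-assemblage calculations is the saturation index of each candidate mineral. A minimal sketch; the activities passed in would come from a speciation code such as PHREEQC, and no Ksp values from the study's database are reproduced here:

```python
import math

def saturation_index(ion_activity_product, k_sp):
    """SI = log10(IAP / Ksp); SI > 0 means the solution is supersaturated
    and the mineral (e.g. mirabilite or ettringite) may precipitate."""
    return math.log10(ion_activity_product / k_sp)

def mirabilite_iap(a_na, a_so4, a_w):
    """Ion activity product of mirabilite, Na2SO4·10H2O:
    IAP = a(Na+)^2 * a(SO4^2-) * a(H2O)^10."""
    return a_na ** 2 * a_so4 * a_w ** 10
```

A geochemical solver sweeps this test over every mineral in the database at each time step, which is how the model decides whether mirabilite, ettringite or gypsum joins the assemblage.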

Keywords: thermodynamic calculations, reactive transport, radioactive waste disposal, PHREEQC

Procedia PDF Downloads 158
8867 Analysis of Two-Echelon Supply Chain with Perishable Items under Stochastic Demand

Authors: Saeed Poormoaied

Abstract:

Perishability and developing an intelligent control policy for perishable items are major concerns of marketing managers in a supply chain. In this study, we address a two-echelon supply chain problem for perishable items with a single vendor and a single buyer. The buyer adopts an age-based continuous review policy that takes both the stock level and the aging process of items into account. The vendor operates under a warehouse framework, where its lot size is determined with respect to the batch size of the buyer. The model assumes a positive, fixed lead time for the buyer and zero lead time for the vendor. Demand follows a Poisson process, and any unmet demand is lost. We provide exact analytic expressions for the operational characteristics of the system by using the renewal reward theorem. Items have a fixed lifetime, after which they become unusable and are disposed of from the buyer's system. The age of items starts when they are unpacked and ready for consumption at the buyer; while items are held by the vendor, there is no aging process and hence no perishing at the vendor's site. The model is developed under a centralized framework, which takes the expected profit of both vendor and buyer into consideration. The goal is to determine the optimal policy parameters under a service level constraint at the buyer's site. A sensitivity analysis is performed to investigate the effect of the key input parameters on the expected profit and order quantity in the supply chain. The efficiency of the proposed age-based policy is also evaluated through a numerical study. Our results show that when the unit perishing cost is negligible, a significant cost saving is achieved.
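The operating logic of such a system can be illustrated with a small Monte Carlo sketch of a lost-sales reorder-point policy with a fixed item lifetime. The parameter values and the simple (r, Q) trigger below are illustrative assumptions; the paper itself derives exact expressions via the renewal reward theorem and uses an age-based trigger rather than a pure stock-level one.

```python
import heapq, random

def simulate(lam=5.0, Q=20, r=5, lead=1.0, lifetime=4.0,
             horizon=10_000.0, seed=42):
    """Poisson demand at rate lam; lost sales; order Q units when stock
    drops to r (one outstanding order, fixed lead time); units perish a
    fixed lifetime after arriving at the buyer."""
    rng = random.Random(seed)
    t, stock, on_order, arrival = 0.0, [], False, None  # stock: expiry times
    served = demanded = perished = 0
    while True:
        t += rng.expovariate(lam)             # next Poisson demand epoch
        if t > horizon:
            break
        if arrival is not None and t >= arrival:   # order arrives, aging starts
            for _ in range(Q):
                heapq.heappush(stock, arrival + lifetime)
            arrival, on_order = None, False
        while stock and stock[0] <= t:        # discard expired units
            heapq.heappop(stock)
            perished += 1
        demanded += 1
        if stock:
            heapq.heappop(stock)              # serve with the oldest unit
            served += 1                       # else: demand is lost
        if len(stock) <= r and not on_order:
            arrival, on_order = t + lead, True
    return served / demanded, perished

fill_rate, perished = simulate()
print(f"fill rate = {fill_rate:.3f}, perished units = {perished}")
```

A simulation of this kind is also how the exact renewal-reward expressions in such papers are typically cross-checked numerically.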

Keywords: two-echelon supply chain, perishable items, age-based policy, renewal reward theorem

Procedia PDF Downloads 139
8866 A Study on the Correlation Analysis between the Pre-Sale Competition Rate and the Apartment Unit Plan Factor through Machine Learning

Authors: Seongjun Kim, Jinwooung Kim, Sung-Ah Kim

Abstract:

The development of information and communication technology also affects human cognition and thinking; in the field of design especially, new techniques are being tried. In architecture, new design methodologies such as machine learning and data-driven design are being applied. In particular, these methodologies are used to analyze the factors related to the value of real estate or to assess feasibility at the early planning stage of apartment housing. However, since the value of apartment buildings is often determined by external factors such as location and traffic conditions rather than by the interior elements of the buildings, data is rarely used in the design process. Therefore, even where the technical conditions are in place, it is difficult to apply data-driven design to the internal elements of the apartment during the design process. As a result, designers of apartment housing have been forced to rely on designer experience or modular design alternatives rather than data-driven design at the design stage, resulting in a uniform arrangement of space in apartment housing. The purpose of this study is to propose a methodology that supports designers in producing apartment unit plans with high consumer preference, by deriving the correlation and importance of the floor plan elements preferred by consumers through machine learning and reflecting this information in the early design process. Data on the pre-sale competition rate and the elements of the floor plan are collected, and the correlation between the pre-sale competition rate and the independent variables is analyzed through machine learning. This analytical model can be used to review the apartment unit plan produced by the designer and to assist the designer. 
Therefore, it is possible to produce apartment floor plans with high preference, because the trained model can provide feedback on a unit plan when it is used in the floor plan design of apartment housing.
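The correlation step described above can be sketched as follows. The unit-plan features, the synthetic competition rate, and the use of simple Pearson correlations are all illustrative assumptions rather than the study's actual data or learning model.

```python
import numpy as np

# Hypothetical unit-plan factors vs a synthetic pre-sale competition rate.
rng = np.random.default_rng(0)
n = 500
area         = rng.uniform(59, 135, n)             # unit area (m^2)
bedrooms     = rng.integers(2, 5, n).astype(float) # number of bedrooms
living_ratio = rng.uniform(0.3, 0.7, n)            # living/dining share of area

# Synthetic target: competition rate driven mainly by area and living ratio.
rate = 0.05 * area + 8.0 * living_ratio + rng.normal(0.0, 1.0, n)

# Rank the floor-plan factors by correlation with the competition rate.
for name, x in [("area", area), ("bedrooms", bedrooms),
                ("living_ratio", living_ratio)]:
    corr = np.corrcoef(x, rate)[0, 1]
    print(f"{name:12s} corr with competition rate = {corr:+.3f}")
```

In a real application the fabricated target would be replaced by observed pre-sale competition rates, and the correlation screen would typically be followed by a trained model (e.g. a tree ensemble) supplying feature importances.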

Keywords: apartment unit plan, data-driven design, design methodology, machine learning

Procedia PDF Downloads 262
8865 Adoption of Climate-Smart Agriculture Practices Among Farmers and Its Effect on Crop Revenue in Ethiopia

Authors: Fikiru Temesgen Gelata

Abstract:

Food security, adaptation, and climate change mitigation are all problems that can be addressed simultaneously by Climate-Smart Agriculture (CSA). This study examines the determinants of CSA practices among smallholder farmers, aiming to understand the factors guiding adoption decisions and to evaluate the impact of CSA on smallholder farmer income in the study areas. Three-stage sampling techniques were applied to select 230 smallholders randomly. The Mann-Kendall test and a multinomial endogenous switching regression (MESR) model were used to analyze trends within long-term temporal data and the impact of CSA on smallholder farmer income, respectively. Findings revealed that education level, household size, land ownership, off-farm income, access to climate information, and contact with extension agents strongly favoured the adoption of CSA practices. On the contrary, erosion exerted a detrimental impact on all the agricultural practices examined within the study region. Various factors such as farming methods, farm size, proximity to irrigated farmland, availability of extension services, distance to market hubs, and access to weather forecasts were recognized as key determinants influencing the adoption of CSA practices. The MESR model revealed that joint adoption of crop rotation and soil and water conservation practices significantly increased farm income by 1,107,245 ETB. The study recommends that governments prioritize addressing climate change in their development agendas to increase the adoption of climate-smart farming techniques.

Keywords: climate-smart practices, food security, income, MESR, Ethiopia

Procedia PDF Downloads 20
8864 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow

Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen

Abstract:

Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods that simulate real turbulence structures by using empirical and experimental information. Monotonically integrated large eddy simulation (MILES) is an attractive approach for modelling turbulence in high-Re flows, based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, with the action of the SGS model included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source CFD libraries of OpenFOAM. The role of the flux-limiter schemes, namely Gamma, superbee, van Albada, and van Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with well-established Direct Numerical Simulation (DNS) results for wall-generated turbulence. The numerical predictions indicate that the Gamma, van Leer, and van Albada limiters produced more diffusion and overpredicted the velocity profiles, while the superbee scheme reproduced velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviated slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space to draw conclusions on the performance of the flux-limiter schemes in the OpenFOAM context.
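For reference, the classical TVD limiter functions named above can be written compactly as functions of r, the ratio of consecutive solution gradients; the Gamma scheme is an OpenFOAM-specific NVD-based blend and is not reproduced here.

```python
def superbee(r):
    # Roe's superbee: the most compressive second-order TVD limiter.
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def van_leer(r):
    # van Leer: smooth, symmetric limiter.
    return (r + abs(r)) / (1.0 + abs(r))

def van_albada(r):
    # van Albada: smooth limiter, slightly more diffusive than superbee.
    return (r * r + r) / (r * r + 1.0) if r > 0.0 else 0.0

# All three satisfy phi(1) = 1 (second-order accuracy on smooth data)
# and phi(r <= 0) = 0 (revert to first-order upwind at extrema).
for r in (-0.5, 0.0, 0.5, 1.0, 2.0, 10.0):
    print(f"r={r:5.1f}  superbee={superbee(r):.3f}  "
          f"vanLeer={van_leer(r):.3f}  vanAlbada={van_albada(r):.3f}")
```

The differing diffusivity reported in the abstract follows directly from these shapes: superbee hugs the upper bound of the TVD region, while van Leer and van Albada stay well inside it.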

Keywords: flux limiters, implicit SGS, MILES, OpenFOAM, turbulence statistics

Procedia PDF Downloads 181
8863 Numerical Prediction of Width Crack of Concrete Dapped-End Beams

Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo

Abstract:

Several methods have been utilized to study the prediction of cracking in concrete structures under loading; finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends, as cracks that exceed the allowable widths are unacceptable in environments that are aggressive to reinforcing steel. To simulate the crack width, the discrete crack approach was adopted by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two dapped-end cases were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers a reinforcement based on hangers as well as vertical and horizontal rings; in the second case, 50% of the vertical stirrups in the dapped end to the main part of the beam were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading up to the service load. The models were built using the software package ANSYS v. 16.2. The concrete was modeled using three-dimensional solid elements (SOLID65) capable of cracking in tension and crushing in compression, with a Drucker-Prager yield surface used to include plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods, such as the nodal release technique, adopting softening relationships between tractions and separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces; this technique is called the CZM. 
The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode-I-dominated bilinear CZM assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Crack opening was characterized by the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the re-entrant corner of the crack. To validate the proposed approach, the results obtained with this procedure are compared with the experimental tests. Good correlation between the experimental and numerical load-displacement curves was observed, and the numerical models also yielded load-crack width curves. In both cases, the proposed model confirms the capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental to the prediction of crack width. The crack width results can be considered good from a practical point of view, and favorable results were obtained for both the load-displacement curve of the test and the location of the crack.
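A bilinear Mode I traction-separation law of the kind used in such CZM analyses can be sketched as follows; the peak stress and separation values below are illustrative assumptions, not the calibrated parameters of the study.

```python
def traction(delta, sigma_max=3.0, delta_0=0.01, delta_f=0.2):
    """Normal traction (MPa) vs crack opening (mm): linear ramp up to the
    peak stress at delta_0, then linear softening to full debonding at
    delta_f. Illustrative bilinear Mode I law."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta_0:
        return sigma_max * delta / delta_0                          # loading
    if delta < delta_f:
        return sigma_max * (delta_f - delta) / (delta_f - delta_0)  # softening
    return 0.0                                                      # debonded

# The area under the bilinear curve is the critical fracture energy:
#   G_c = 0.5 * sigma_max * delta_f
g_c = 0.5 * 3.0 * 0.2
print(f"G_c = {g_c} MPa*mm")
```

In the finite element model, the peak stress (maximum normal contact stress) and the gap at complete debonding are exactly the two quantities the abstract lists as inputs to the contact-based CZM.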

Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis

Procedia PDF Downloads 163
8862 How Message Framing and Temporal Distance Affect Word of Mouth

Authors: Camille Lacan, Pierre Desmet

Abstract:

In the crowdfunding model, a campaign succeeds by collecting the required funds over a predefined duration. The success of a crowdfunding campaign depends both on the capacity to attract members of the online communities concerned and on the community members' involvement in online word-of-mouth recommendations. To maximize the campaign's success probability, project creators (i.e., organizations appealing for financial resources) send messages asking contributors to spread word of mouth. Internet users relay information about projects through word of mouth, defined as “a critical tool for facilitating information diffusion throughout online communities”. The effectiveness of these messages depends on the message framing and on the time at which they are sent to contributors (i.e., at the start of the campaign or close to the deadline). This article addresses the following question: what are the effects of message framing and temporal distance on the willingness to share word of mouth? Drawing on Prospect Theory and Construal Level Theory, this study examines the interplay between message framing (gains vs. losses) and temporal distance (message sent close to the deadline vs. far from it) on the intention to share word of mouth. A between-subjects experimental design is conducted to test the research model. Results show significant differences between a loss-framed message (lack of benefits if the campaign fails) associated with a short deadline (ending tomorrow) and a gain-framed message (benefits if the campaign succeeds) associated with a distant deadline (ending in three months). However, this effect is moderated by the anticipated regret of a campaign failure and by temporal orientation. These moderating effects contribute to specifying the boundary conditions of the framing effect. Managing message framing and temporal distance are thus key decisions for influencing the willingness to share word of mouth.

Keywords: construal levels, crowdfunding, message framing, word of mouth

Procedia PDF Downloads 248
8861 Lignin Phenol Formaldehyde Resole Resin: Synthesis and Characteristics

Authors: Masoumeh Ghorbani, Falk Liebner, Hendrikus W. G. van Herwijnen, Johannes Konnerth

Abstract:

Phenol formaldehyde (PF) resins are widely used as wood adhesives for a variety of industrial products such as plywood, laminated veneer lumber, and others. Lignin, a main constituent of wood, has become well known as a potential substitute for phenol in PF adhesives because of its structural similarity to phenol. During the last decades, numerous research approaches have been carried out to substitute phenol with pulping-derived lignin, whereby the lower reactivity of resins synthesized with shares of lignin seems to be one of the major challenges. This work reports a systematic screening of different types of lignin (by plant origin and pulping process) for their suitability to replace phenol in phenolic resins. Lignins from different plant sources (softwood, hardwood, and grass) were used, as these should differ significantly in the reactivity of their phenolic core units towards formaldehyde. Additionally, a possible influence of the pulping process was addressed by using lignins from the soda, kraft, and organosolv processes and various lignosulfonates (sodium, ammonium, calcium, magnesium). To determine the influence of lignin on adhesive performance, the rate of viscosity development, the development of bond strength with varying hot-pressing time, and other thermal properties were investigated. To evaluate the performance of the cured end product, selected properties were studied on solid wood-adhesive bond joints, compact panels, and plywood. As a main result, it was found that lignin significantly accelerates the viscosity development during adhesive synthesis. Bond strength development during curing decelerated for all lignin types, though least for pine kraft lignin and spruce sodium lignosulfonate. 
However, the overall performance of the products prepared with the latter adhesives fulfilled the main standard requirements, even after exposing the products to harsh environmental conditions. Thus, a potential application can be considered for processes where reactivity is less critical but adhesive cost and product performance are essential.

Keywords: phenol formaldehyde resin, lignin phenol formaldehyde resin, ABES, DSC

Procedia PDF Downloads 233
8860 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, which limits the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by the enhancement of phenomena. The objective of the current work is thus the generation of numerous process alternatives based on phenomena, and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is decomposed into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them; e.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena that can overcome the difficulties or drawbacks of the current process, or enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the meaningless ones. 
For example, phase change phenomena require the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e., it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function, and combining these options for the different functions leads to a superstructure of process options. These process options, each defined by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined, and it can be applied to any chemical or biochemical process because of its generic nature.

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 226
8859 Looking beyond Corporate Social Responsibility to Sustainable Development: Conceptualisation and Theoretical Exploration

Authors: Mercy E. Makpor

Abstract:

The traditional Corporate Social Responsibility (CSR) idea has gone beyond just ensuring safe environments, caring about global warming, and ensuring good living standards and conditions for society at large. The paradigm shift is towards a focus on strategic objectives and long-term value creation for both businesses and society for a realistic future. As an important approach to solving social and environmental issues, CSR has been accepted globally, yet the approach is expected to go beyond where it currently stands. Much is expected from businesses and governments at every level, globally and locally. This leads back to the original idea of the concept: how it originated and how it has been perceived over the years. Little wonder there have been many definitions surrounding the concept without a single globally accepted one; the definition of CSR given by the European Commission is adopted for the purpose of this paper. Sustainable Development (SD), on the other hand, has been viewed in recent years as an ethical concept explained in the UN report “Our Common Future,” also referred to as the Brundtland report, which summarises the need for SD to take place in the present without compromising the future. The recent 21st-century framework on sustainability known as the Triple Bottom Line (TBL) has added its voice to the concepts of CSR and sustainable development. The TBL model holds that businesses should report not only on their financial performance but also on their social and environmental performances, highlighting that CSR has moved beyond a “material-impact” approach towards a “future-oriented” approach (sustainability). In this paper, the concept of CSR is revisited by exploring the various theories therein. 
The discourse on sustainable development and its frameworks is also presented, relating these to how CSR can benefit businesses and their stakeholders as well as society as a whole, not just for the present but for the future. The paper does this by exploring the importance of both concepts (CSR and SD) and concludes by making recommendations for more empirical research in the near future.

Keywords: corporate social responsibility, sustainable development, sustainability, triple bottom line model

Procedia PDF Downloads 242
8858 Copolymers of Epsilon-Caprolactam Received via Anionic Polymerization in the Presence of Polypropylene Glycol Based Polymeric Activators

Authors: Krasimira N. Zhilkova, Mariya K. Kyulavska, Roza P. Mateva

Abstract:

The anionic polymerization of ε-caprolactam (CL) with bifunctional activators has been extensively studied as an effective method of improving the chemical resistance, impact resistance, elasticity, and other mechanical properties of polyamide 6 (PA6). In the presence of activators or macroactivators (MAs), also called polymeric activators (PACs), the anionic polymerization of lactams proceeds rapidly in a temperature range of 130-180 °C, well below the melting point of PA6 (220 °C), thus permitting the direct manufacturing of the copolymer product together with the desired modifications of polyamide properties. Copolymers of PA6 with an elastic polypropylene glycol (PPG) middle block in the main chain were successfully synthesized via activated anionic ring-opening polymerization (ROP) of CL. Using novel PACs based on PPG polyols of different molecular weights, the anionic ROP of CL was carried out and investigated in the presence of the basic initiator, the sodium salt of CL (NaCL). The PACs were synthesized as N-carbamoyllactam derivatives of hydroxyl-terminated PPG, functionalized with isophorone diisocyanate [IPh, 5-isocyanato-1-(isocyanatomethyl)-1,3,3-trimethylcyclohexane] and then blocked with CL units via an addition reaction. The block copolymers were analyzed and confirmed by ¹H-NMR and FT-IR spectroscopy. The influence of the CL/PAC ratio in the feed, the length of the PPG segments, and the polymerization conditions on the kinetics of the anionic ROP, on the average molecular weight, and on the structure of the obtained block copolymers was investigated. The structure and phase behaviour of the copolymers were explored with differential scanning calorimetry, wide-angle X-ray diffraction, thermogravimetric analysis, and dynamic mechanical thermal analysis. The dependence of crystallinity on the PPG content incorporated into the copolymer backbone was estimated. Additionally, the mechanical properties of the obtained copolymers were studied by the notched impact test. 
From the investigation performed in this study, it can be concluded that using PPG-based PACs under the chosen ROP conditions leads to well-defined PA6-b-PPG-b-PA6 copolymers with improved impact resistance.

Keywords: anionic ring opening polymerization, caprolactam, polyamide copolymers, polypropylene glycol

Procedia PDF Downloads 407
8857 Assessment of Environmental Quality of an Urban Setting

Authors: Namrata Khatri

Abstract:

The rapid growth of cities is transforming the urban environment and posing significant challenges for environmental quality. This study examines the urban environment of Belagavi in Karnataka, India, using geostatistical methods to assess the spatial pattern and land use distribution of the city and to evaluate the quality of the urban environment; it is driven by the necessity of assessing the environmental impact of urbanisation. Satellite data was utilised to derive information on land use and land cover. The investigation revealed that land use had changed significantly over time, with a drop in vegetation cover and an increase in built-up areas. High-resolution satellite data was also utilised to map the city's open areas and gardens. GIS-based analysis was used to assess public green space accessibility and to identify regions with inadequate waste management practices; the findings revealed that garbage collection and disposal techniques in specific areas of the city need to be improved. Moreover, the study evaluated the city's thermal environment using Landsat 8 land surface temperature (LST) data, finding that built-up regions had higher LST values than green areas, pointing to an urban heat island (UHI) effect. The study's conclusions have far-reaching ramifications for urban planners and policymakers in Belagavi and other similar cities: the findings may be utilised to create sustainable urban planning strategies that address the environmental effects of urbanisation while also improving the quality of life for city dwellers. Satellite data and high-resolution satellite imagery were gathered for the study, and remote sensing and GIS tools were utilised to process and analyse the data. Ground truthing surveys were also carried out to confirm the accuracy of the remote sensing and GIS-based results. 
Overall, this study provides a comprehensive assessment of Belagavi's environmental quality and emphasises the potential of remote sensing and geographic information systems (GIS) approaches in environmental assessment and management.

Keywords: environmental quality, UEQ, remote sensing, GIS

Procedia PDF Downloads 78
8856 Evolution of Relations among Multiple Institutional Logics: A Case Study from a Higher Education Institution

Authors: Ye Jiang

Abstract:

To examine how the relationships among multiple institutional logics vary over time and the factors that may impact this process, we conducted a 15-year in-depth longitudinal case study of a higher education institution, examining its exploration of college student management. Employing constructive grounded theory, we developed a four-stage process model, comprising separation, formalization, selective bridging, and embeddedness, that shows how two contradictory logics become complementary and finally merge into a new hybridized logic. We argue that selective bridging is an important step in changing inter-logic relations. We also found that ambidextrous leadership and situational sensemaking are two key factors that drive this process. Our contribution to the literature is threefold. First, we enhance the literature on the changing relationships among multiple institutional logics, and our findings advance the understanding of these relationships through a dynamic view. While most studies have tended to assume that the relationship among logics is static and persistently contentious, we contend that the relationships among multiple institutional logics can change over time: competing logics can become complementary, and a new hybridized logic can emerge therefrom. The four-stage process model offers insights into logic hybridization, which is underexplored in the literature. Second, our research reveals that selective bridging is important in making conflicting logics compatible, and thus constitutes a key step in creating a new hybridized logic. Our findings suggest that the relations between multiple logics are manageable and can thus be leveraged for organizational innovation. Finally, the factors influencing the variations in inter-logic relations enrich the understanding of the antecedents of these dynamics.

Keywords: institutional theory, institutional logics, ambidextrous leadership, situational sensemaking

Procedia PDF Downloads 155
8855 Differences in Vitamin D Status in Caucasian and Asian Women Following Ultraviolet Radiation (UVR) Exposure

Authors: O. Hakim, K. Hart, P. McCabe, J. Berry, L. E. Rhodes, N. Spyrou, A. Alfuraih, S. Lanham-New

Abstract:

It is known that skin pigmentation reduces the penetration of ultraviolet radiation (UVR) and thus the photosynthesis of 25(OH)D. However, the ethnic differences in 25(OH)D production remain to be fully elucidated. This study aimed to investigate the differences in vitamin D production between Asian and Caucasian postmenopausal women in response to a defined, controlled UVB exposure. Seventeen women participated in the study, each acting as her own control: nine white Caucasian (skin phototypes II and III) and eight South Asian (skin phototypes IV and V). Three blood samples were taken for measurement of 25(OH)D during the run-in period (nine days, no sunbed exposure), after which all subjects underwent an identical UVR exposure protocol irrespective of skin colour (nine days, three sunbed sessions of 6, 8, and 8 minutes respectively, with approximately 80% of the body surface exposed). Skin tone was measured four times during the study. Both groups showed a gradual increase in 25(OH)D, with final levels significantly higher than baseline (p<0.01): mean 25(OH)D concentration rose from 43.58±19.65 to 57.80±17.11 nmol/l among Caucasian women and from 27.03±23.92 to 44.73±17.74 nmol/l among Asian women. Baseline vitamin D status was classified as deficient among the Asian women and insufficient among the Caucasian women. The percentage increase in vitamin D₃ was 39.86% (SD 21.02) among Caucasian and 207.78% (SD 286.02) among Asian subjects; this greater response to UVR exposure reflects the lower baseline levels of the Asian subjects. A mixed linear model analysis identified a significant effect of the duration of UVR exposure on the production of 25(OH)D, but no significant effect of ethnicity or skin tone. 
These novel findings indicate that people of Asian ethnicity are fully capable of producing amounts of vitamin D similar to those of the Caucasian group; the initial vitamin D concentration influences the amount of UVB needed to reach equal serum concentrations.

Keywords: ethnicity, Caucasian, South Asian, vitamin D, ultraviolet radiation, UVR

Procedia PDF Downloads 531
8854 Bioavailability of Zinc to Wheat Grown in the Calcareous Soils of Iraqi Kurdistan

Authors: Muhammed Saeed Rasheed

Abstract:

Knowledge of the zinc and phytic acid (PA) concentrations of staple cereal crops is essential when evaluating the nutritional health of national and regional populations. In the present study, a total of 120 farmers’ fields in Iraqi Kurdistan were surveyed for zinc status in soil and wheat grain samples; wheat is the staple carbohydrate source in the region. Soils were analysed for total concentrations of phosphorus (PT) and zinc (ZnT), available P (POlsen) and Zn (ZnDTPA), and for pH. Values (mg kg⁻¹) ranged from 403 to 3740 (PT), 42.0 to 203 (ZnT), 2.13 to 28.1 (POlsen) and 0.14 to 5.23 (ZnDTPA); pH was in the range 7.46-8.67. The Zn concentration, PA/Zn molar ratio and estimated Zn bioavailability were also determined in wheat grain. The ranges of Zn and PA concentrations (mg kg⁻¹) were 12.3-63.2 and 5400-9300, respectively, giving a PA/Zn molar ratio of 15.7-30.6. A trivariate model was used to estimate the intake of bioaccessible Zn, employing the following parameter values: (i) maximum Zn absorption = 0.09 (AMAX), (ii) equilibrium dissociation constant of the zinc-receptor binding reaction = 0.680 (KP), and (iii) equilibrium dissociation constant of the Zn-PA binding reaction = 0.033 (KR). In the model, total daily absorbed Zn (TAZ) (mg d⁻¹) as a function of total daily nutritional PA (mmol d⁻¹) and total daily nutritional Zn (mmol d⁻¹) was estimated assuming an average wheat flour consumption of 300 g d⁻¹ in the region. Consideration of the PA and Zn intakes suggests only 21.5±2.9% of grain Zn is bioavailable, so that the effective Zn intake from wheat is only 1.84-2.63 mg d⁻¹ for the local population. Overall, the results suggest available dietary Zn is below recommended levels (11 mg d⁻¹), partly due to low uptake by wheat but also due to the presence of large concentrations of PA in wheat grains. A crop breeding programme combined with enhanced agronomic management methods is needed to enhance both Zn uptake and Zn bioavailability in the grain of cultivated wheat varieties.
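The abstract's bioavailability arithmetic can be sketched in a few lines. The grain concentrations below are illustrative mid-range values chosen from the reported ranges (not measurements from the study), and 21.5% is the abstract's estimated bioavailable fraction of grain Zn; the molar masses are standard values for Zn and phytic acid (C6H18O24P6).

```python
# Sketch of the PA/Zn molar-ratio and effective-Zn-intake arithmetic.
# Grain concentrations below are illustrative mid-range values;
# 0.215 is the abstract's estimated bioavailable fraction of grain Zn.
MW_ZN = 65.38    # g/mol, zinc
MW_PA = 660.04   # g/mol, phytic acid (C6H18O24P6)

def pa_zn_molar_ratio(pa_mg_per_kg: float, zn_mg_per_kg: float) -> float:
    """Molar ratio of phytic acid to zinc in grain."""
    return (pa_mg_per_kg / MW_PA) / (zn_mg_per_kg / MW_ZN)

def effective_zn_intake(zn_mg_per_kg: float,
                        flour_g_per_day: float = 300.0,
                        bioavailable_fraction: float = 0.215) -> float:
    """Bioavailable Zn intake (mg/day) from daily wheat-flour consumption."""
    total_zn_mg = zn_mg_per_kg * flour_g_per_day / 1000.0
    return total_zn_mg * bioavailable_fraction

ratio = pa_zn_molar_ratio(pa_mg_per_kg=7000.0, zn_mg_per_kg=30.0)
intake = effective_zn_intake(zn_mg_per_kg=30.0)
print(f"PA/Zn molar ratio: {ratio:.1f}")          # falls in the reported 15.7-30.6 range
print(f"Effective Zn intake: {intake:.2f} mg/d")  # within the reported 1.84-2.63 mg/d
```

This reproduces the scale of the reported figures: a mid-range grain (30 mg kg⁻¹ Zn, 7000 mg kg⁻¹ PA) gives a PA/Zn ratio near the middle of the 15.7-30.6 range and an effective intake inside the 1.84-2.63 mg d⁻¹ band.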

Keywords: phosphorus, zinc, phytic acid, phytic acid to zinc molar ratio, zinc bioavailability

Procedia PDF Downloads 119
8853 A Rational Strategy to Maximize the Value-Added Products by Selectively Converting Components of Inferior Heavy Oil

Authors: Kashan Bashir, Salah Naji Ahmed Sufyan, Mirza Umar Baig

Abstract:

In this study, n-dodecane, tetralin, decalin, and tetramethylbenzene (TMBE) were used as model compounds of the alkanes, naphthenic-aromatics, cycloalkanes and alkyl-benzenes present in hydro-diesel. The catalytic cracking properties of the four model compounds over a Y zeolite catalyst (Y-Cat.) and a ZSM-5 zeolite catalyst (ZSM-5-Cat.) were probed. The experimental results revealed that high conversion of macromolecular paraffins and naphthenic aromatics was achieved over Y-Cat, whereas its low cracking activity towards the micromolecular intermediate paraffins and olefins, together with its high hydride transfer activity, works against the production of value-added products (light olefins and gasoline). In contrast, although the hydride transfer reaction was greatly inhibited over ZSM-5-Cat, low conversion of macromolecules was observed, attributed to diffusion limitations. Interestingly, a mixed catalyst compensates for the shortcomings of the two catalysts, and a “relay reaction” between Y-Cat and ZSM-5-Cat was proposed. Specifically, the added Y-Cat acts as a “pre-cracking booster site” and promotes macromolecule conversion. The addition of ZSM-5-Cat not only significantly suppresses the hydride transfer reaction but also contributes to the cracking of the intermediate paraffins and olefins into ethylene and propylene, resulting in a high yield of alkyl-benzenes (gasoline), ethylene, and propylene with a low yield of naphthalenes (LCO) and coke. Catalytic cracking evaluation experiments on hydro-LCO over the mixed catalyst were also performed to further verify the “relay reaction” described above, showing the highest yields of LPG and gasoline over the mixed catalyst. The results indicate that Y-Cat and ZSM-5-Cat have a synergistic effect on the conversion of hydro-diesel, improving the yields of the corresponding value-added products while limiting the coke yield.

Keywords: synergistic effect, hydro-diesel cracking, FCC, zeolite catalyst, ethylene and propylene

Procedia PDF Downloads 63
8852 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor in its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot-scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow.
The observed head loss was also compared to the head loss predicted by several known conceptual, theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross-section reactor configurations and one multiple concentric annular cross-section configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the assumed value of the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity and that the flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of these assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented in the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are in fact not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were identified. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.
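The wall-friction component of the head loss can be sketched with the hydraulic-diameter approach on which such theoretical models rely. This is a generic Darcy-Weisbach sketch, not the authors' calibrated equations; the annulus dimensions, flow rate and roughness below are illustrative assumptions.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def annulus_head_loss(q: float, d_out: float, d_in: float, length: float,
                      nu: float = 1.0e-6, roughness: float = 1.5e-6) -> float:
    """Darcy-Weisbach friction head loss (m) in a concentric annulus,
    using the hydraulic diameter D_h = D_out - D_in."""
    area = math.pi / 4.0 * (d_out**2 - d_in**2)  # annular cross-section, m^2
    v = q / area                                  # mean velocity, m/s
    d_h = d_out - d_in                            # hydraulic diameter, m
    re = v * d_h / nu                             # Reynolds number
    if re < 2300.0:                               # laminar regime
        f = 64.0 / re
    else:                                         # turbulent: Swamee-Jain explicit fit
        f = 0.25 / math.log10(roughness / (3.7 * d_h) + 5.74 / re**0.9) ** 2
    return f * (length / d_h) * v**2 / (2.0 * G)

# Illustrative reactor: 50/30 mm annulus, 1 m long, 0.1 L/s of water (laminar)
hf = annulus_head_loss(q=1.0e-4, d_out=0.05, d_in=0.03, length=1.0)
print(f"friction head loss: {hf:.2e} m")
```

Minor (accessory) losses and the non-uniform velocity profiles the CFD model revealed would add to this estimate; the sketch only covers the one-dimensional wall-friction term that the compared equations share.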

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 215
8851 Future Design and Innovative Economic Models for Futuristic Markets in Developing Countries

Authors: Nessreen Y. Ibrahim

Abstract:

Designing the future according to a realistic analytical study of futuristic market needs can be a milestone strategy for making a huge improvement in the economies of developing countries. In developing countries, access to high technology and the latest scientific approaches is very limited. The financial problems of low- and medium-income countries have negative effects on the kind and quality of new technologies imported for and applied in their markets. Thus, there is a strong need for a paradigm shift in the design process to improve and evolve their development strategy. This paper discusses future possibilities in developing countries and how they can design their own future according to specific Future Design Models (FDM), established to solve certain economic problems as well as political and cultural conflicts. FDM is a strategic-thinking framework that provides improvement in both content and process. The content includes beliefs, values, mission, purpose, conceptual frameworks, research, and practice, while the process includes design methodology, design systems, and design management tools. The main objective of this paper was to build an innovative economic model to design a chosen possible futuristic scenario: by understanding future market needs, analyzing the real-world setting, solving the model questions through future-driven design, and finally interpreting the results, to discuss to what extent the results can be transferred to the real world. The paper discusses Egypt as a potential case study. Since Egypt has highly complex economic problems, extra-dynamic political factors, and very rich cultural aspects, it is a very challenging example for applying FDM. The results recommend using FDM numerical modeling as a starting point for designing the future.

Keywords: developing countries, economic models, future design, possible futures

Procedia PDF Downloads 263
8850 Recovery of Draw Solution in Forward Osmosis by Direct Contact Membrane Distillation

Authors: Su-Thing Ho, Shiao-Shing Chen, Hung-Te Hsu, Saikat Sinha Ray

Abstract:

Forward osmosis (FO) is an emerging technology for direct and indirect potable water reuse applications. However, successful implementation of FO is still hindered by the lack of draw solution recovery with high efficiency. Membrane distillation (MD) is a thermal separation process that uses a hydrophobic microporous membrane sandwiched between a warm feed stream and a cold permeate stream. The driving force of MD is the partial vapor pressure difference across the membrane created by the temperature difference. In this study, a direct contact membrane distillation (DCMD) system was used to recover the diluted draw solution of FO. Na3PO4 at pH 9 and EDTA-2Na at pH 8 were used as the feed solutions for MD, since they produce high water flux and minimal salt leakage in the FO process. At high pH, trivalent and tetravalent ions are much more easily retained on the draw solution side in the FO process. The results demonstrated that PTFE with a pore size of 1 μm achieved the highest water flux (12.02 L/m²h), followed by PTFE 0.45 μm (10.05 L/m²h), PTFE 0.1 μm (7.38 L/m²h) and then PP (7.17 L/m²h), while using 0.1 M Na3PO4 as the draw solute. The phosphate concentration and conductivity in the PTFE (0.45 μm) permeate were as low as 1.05 mg/L and 2.89 μS/cm, respectively. Although PTFE with a pore size of 1 μm obtained the highest water flux, the phosphate concentration in its permeate was higher than that of the other MD membranes. This study indicated that all four MD membranes performed well and that PTFE with a pore size of 0.45 μm was the best among the tested membranes, achieving high water flux and high rejection of phosphate (99.99%) in the recovery of the diluted draw solution. The results also demonstrate that high water flux and high rejection of phosphate can be obtained when operating at a cross-flow velocity of 0.103 m/s with Tfeed of 60 ℃ and Tdistillate of 20 ℃.
In addition, the results show that Na3PO4 is more suitable for recovery than EDTA-2Na, and that recovering the diluted Na3PO4 yields permeate water of high purity. The overall performance indicates that DCMD is a promising technology for recovering the diluted draw solution in the FO process.
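The vapor-pressure driving force behind the reported DCMD operating point (Tfeed 60 ℃, Tdistillate 20 ℃) can be sketched with the Antoine equation for water. The membrane permeability coefficient b_m below is a hypothetical illustrative value chosen so the flux lands near the reported order of magnitude; it is not a coefficient measured in this study.

```python
def water_vapor_pressure_pa(t_celsius: float) -> float:
    """Saturation vapor pressure of water (Pa), Antoine equation
    (constants valid for roughly 1-100 degC; mmHg converted to Pa)."""
    p_mmhg = 10.0 ** (8.07131 - 1730.63 / (233.426 + t_celsius))
    return p_mmhg * 133.322

def dcmd_flux_kg_m2h(t_feed: float, t_distillate: float, b_m: float) -> float:
    """Transmembrane flux J = B_m * (p_feed - p_distillate), in kg/m^2 h.
    b_m is the membrane vapor permeability, kg/(m^2 s Pa) -- hypothetical here."""
    dp = water_vapor_pressure_pa(t_feed) - water_vapor_pressure_pa(t_distillate)
    return b_m * dp * 3600.0

# At 60 degC vs 20 degC the vapor-pressure difference is about 17.5 kPa;
# a hypothetical permeability of 2e-7 kg/(m^2 s Pa) puts the flux near the
# 7-12 L/m^2 h range reported for the tested membranes.
flux = dcmd_flux_kg_m2h(t_feed=60.0, t_distillate=20.0, b_m=2.0e-7)
print(f"estimated flux: {flux:.1f} kg/m^2 h")
```

The sketch shows why the 60 ℃/20 ℃ pair is an effective operating point: the near-exponential rise of vapor pressure with temperature makes the feed-side term dominate the driving force.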

Keywords: membrane distillation, forward osmosis, draw solution, recovery

Procedia PDF Downloads 181