Search results for: strain-based damage model

9051 Methicillin Resistant Staphylococcus aureus Specific Bacteriophage Isolation from Sewage Treatment Plant and in vivo Analysis of Phage Efficiency in Swiss Albino Mice

Authors: Pratibha Goyal, Nupur Mathur, Anuradha Singh

Abstract:

Antibiotic resistance is a worldwide threat to human health in this century. Excessive use of antibiotics after their discovery in the 1940s has caused certain bacteria to become resistant to them. The most common antibiotic-resistant bacteria include Staphylococcus aureus, Salmonella typhi, E. coli, Klebsiella pneumoniae, and Streptococcus pneumoniae. Among them, the resistant Staphylococcus strain called methicillin-resistant Staphylococcus aureus (MRSA), commonly found in the hospital environment, is responsible for several life-threatening infections in humans. Our study aimed to isolate bacteriophages against MRSA from a hospital sewage treatment plant and to analyze their efficiency in vivo in a Swiss albino mouse model. The sewage sample for the isolation of bacteriophages was collected from the SDMH hospital sewage treatment plant in Jaipur. Bacteriophages were isolated by the enrichment technique and, after characterization, were used to determine phage treatment efficiency in mice. The mouse model was used to check the safety and suitability of phage application for human needs, which in turn directly supports the use of natural bacteriophages rather than synthetic chemicals to kill pathogens. Results show plaque formation in vitro and recovery of MRSA-infected mice during the experiment. The favorable lytic efficiency determined against MRSA and Salmonella presents a natural way to treat lethal infections caused by multidrug-resistant bacteria by exploiting their natural host-pathogen relationship.

Keywords: antibiotic resistance, bacteriophages, methicillin-resistant Staphylococcus aureus, pathogens, phage therapy, Salmonella typhi

Procedia PDF Downloads 124
9050 Design, Development and Analysis of Combined Darrieus and Savonius Wind Turbine

Authors: Ashish Bhattarai, Bishnu Bhatta, Hem Raj Joshi, Nabin Neupane, Pankaj Yadav

Abstract:

This report concerns the design, development, and analysis of a combined Darrieus and Savonius wind turbine. Vertical axis wind turbines (VAWTs) are of two types: Darrieus (lift type) and Savonius (drag type). The problem associated with the Darrieus is its lack of self-starting, while the Savonius has low efficiency. The rotor has three straight Darrieus blades with a NACA (National Advisory Committee for Aeronautics) 0018 cross-section placed circumferentially and a helically twisted Savonius blade to obtain an even torque distribution. This design allows the Savonius to self-start the wind turbine, which the Darrieus cannot achieve on its own. All parts of the wind turbine were designed in CAD software, and simulation data were obtained via a CFD (computational fluid dynamics) approach. The design was also imported to a FlashForge Finder to 3D print the wind turbine profile and, finally, testing was carried out. The plastic material used for the Savonius was ABS (acrylonitrile butadiene styrene) and that for the Darrieus was PLA (polylactic acid). From the data obtained experimentally, the fabricated hybrid VAWT operates at a low cut-in speed of 3 m/s, and the maximum power output was found to be 7.5537 W at a wind speed of 6 m/s. The maximum rotor speed recorded was 431 rpm (revolutions per minute) at a wind velocity of 6 m/s, signifying its potential for wind power production. The experimental and simulated data, when analyzed through graph plots, show similar trends in slope, and the difference between the experimental and theoretical values reflects mechanical losses. The objective is to eliminate the need for external motors for self-starting and to study the performance of the model. The testing of the model was carried out for different wind velocities.
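
As a quick illustration of how such figures translate into non-dimensional performance metrics, the sketch below computes the tip-speed ratio and power coefficient from the reported 7.5537 W at 6 m/s and 431 rpm; the rotor radius, blade height and swept area are not given in the abstract and are assumed purely for illustration.

```python
import math

# Reported measurements from the abstract
P_out = 7.5537       # W, maximum power output at 6 m/s
v_wind = 6.0         # m/s, wind speed
rpm = 431            # rotor speed at 6 m/s

# Assumed rotor geometry (NOT given in the abstract) -- purely illustrative
R = 0.30             # m, assumed rotor radius
H = 0.40             # m, assumed blade height (straight-bladed Darrieus)
A = 2 * R * H        # m^2, swept area of a straight-bladed VAWT (width x height)

rho = 1.225          # kg/m^3, air density at sea level

omega = rpm * 2 * math.pi / 60          # rad/s
tip_speed_ratio = omega * R / v_wind    # lambda = omega * R / v
P_wind = 0.5 * rho * A * v_wind**3      # available wind power through swept area
Cp = P_out / P_wind                     # power coefficient

print(f"tip-speed ratio      = {tip_speed_ratio:.2f}")
print(f"available wind power = {P_wind:.1f} W")
print(f"power coefficient Cp = {Cp:.3f}  (Betz limit ~ 0.593)")
```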

Keywords: VAWT, Darrieus, Savonius, helical blades, CFD, FlashForge Finder, ABS, PLA

Procedia PDF Downloads 185
9049 QSAR Modeling of Germination Activity of a Series of 5-(4-Substituent-Phenoxy)-3-Methylfuran-2(5H)-One Derivatives with Potential of Strigolactone Mimics toward Striga hermonthica

Authors: Strahinja Kovačević, Sanja Podunavac-Kuzmanović, Lidija Jevrić, Cristina Prandi, Piermichele Kobauri

Abstract:

The present study is based on molecular modeling of a series of twelve 5-(4-substituent-phenoxy)-3-methylfuran-2(5H)-one derivatives which have potential as strigolactone mimics toward Striga hermonthica. The first step of the analysis included the calculation of molecular descriptors which numerically describe the structures of the analyzed compounds. The descriptors ALOGP (lipophilicity), AClogS (water solubility) and BBB (blood-brain barrier penetration) served as the input variables in multiple linear regression (MLR) modeling of germination activity toward S. hermonthica. Two MLR models were obtained. The first MLR model contains the ALOGP and AClogS descriptors, while the second is based on these two descriptors plus the BBB descriptor. Despite breaking the Topliss-Costello rule, the second MLR model has much better statistical and cross-validation characteristics than the first one. The ALOGP and AClogS descriptors are often very suitable predictors of the biological activity of many compounds. They are very important descriptors of the biological behavior and availability of a compound in any biological system (i.e. the ability to pass through cell membranes). The BBB descriptor defines the ability of a molecule to pass through the blood-brain barrier. Besides the lipophilicity of a compound, this descriptor carries information on molecular bulkiness (its value strongly depends on molecular bulkiness). According to the results of the MLR modeling, these three descriptors are considered very good predictors of the germination activity of the analyzed compounds toward S. hermonthica seeds. This article is based upon work from COST Action FA1206, supported by COST (European Cooperation in Science and Technology).
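
The sketch below illustrates the kind of MLR fit described, using scikit-learn; the descriptor values and responses are random placeholders, not the paper's dataset, and the leave-one-out check is only a stand-in for the cross-validation the authors report.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder descriptor matrix for 12 hypothetical derivatives:
# columns = [ALOGP, AClogS, BBB]; values are NOT the paper's data.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))
y = rng.normal(size=12)          # germination activity (placeholder response)

# Model 1: ALOGP + AClogS;  Model 2: ALOGP + AClogS + BBB
for cols, name in [([0, 1], "MLR-1 (ALOGP, AClogS)"),
                   ([0, 1, 2], "MLR-2 (ALOGP, AClogS, BBB)")]:
    Xs = X[:, cols]
    model = LinearRegression().fit(Xs, y)
    r2 = model.score(Xs, y)
    # Leave-one-out cross-validation as a simple predictive check
    loo_mse = -cross_val_score(LinearRegression(), Xs, y,
                               cv=LeaveOneOut(),
                               scoring="neg_mean_squared_error").mean()
    print(f"{name}: R2 = {r2:.3f}, LOO-MSE = {loo_mse:.3f}")
```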

Keywords: chemometrics, germination activity, molecular modeling, QSAR analysis, strigolactones

Procedia PDF Downloads 266
9048 Comparison of McGrath, Pentax, and Macintosh Laryngoscopes in Normal and Cervical Immobilized Manikins by Novices

Authors: Jong Yeop Kim, In Kyong Yi, Hyun Jeong Kwak, Sook Young Lee, Sung Yong Park

Abstract:

Background: Several video laryngoscopes (VLs) are used to facilitate tracheal intubation in normal and potentially difficult airways, especially by novice personnel. The aim of this study was to compare tracheal intubation performance, in terms of time to intubation, glottic view, difficulty, and dental click, by novices using the McGrath VL, the Pentax Airway Scope (AWS) and the Macintosh laryngoscope in normal and cervical immobilized manikin models. Methods: Thirty-five anesthesia nurses without previous intubation experience were recruited. The participants performed endotracheal intubation in a manikin model at two simulated neck positions (normal and fixed neck via cervical immobilization), using three different devices (McGrath VL, Pentax AWS, and Macintosh direct laryngoscope), three times each. Performance parameters included intubation time, success rate of intubation, Cormack-Lehane laryngoscopy grade, dental click, and subjective difficulty score. Results: Intubation time and success rate at the first attempt were not significantly different between the three groups in the normal airway manikin. In the cervical immobilized manikin, the intubation time was shorter (p = 0.012) and the success rate at the first attempt was significantly higher (p < 0.001) when using the McGrath VL and Pentax AWS compared with the Macintosh laryngoscope. Both VLs showed lower difficulty scores (p < 0.001) and more Cormack-Lehane grade I views (p < 0.001). The incidence of dental clicks was higher with the McGrath VL than with the Macintosh laryngoscope in both the normal and cervical immobilized airway (p = 0.005 and p < 0.001, respectively). Conclusion: The McGrath VL and Pentax AWS resulted in shorter intubation times and higher first-attempt success rates compared with the Macintosh laryngoscope when used by a novice intubator in a cervical immobilized manikin model. However, based on the dental click results, the McGrath VL may carry a higher risk of dental injury than the Macintosh laryngoscope in this scenario.

Keywords: intubation, manikin, novice, videolaryngoscope

Procedia PDF Downloads 136
9047 Application of Carbon Nanotubes as Cathodic Corrosion Protection of Steel Reinforcement

Authors: M. F. Perez, Ysmael Verde, B. Escobar, R. Barbosa, J. C. Cruz

Abstract:

Reinforced concrete is one of the most important materials in the construction industry. However, in recent years the durability of concrete structures has become a worrying problem, mainly due to corrosion of the reinforcing steel; the consequences of corrosion in all cases lead to a shortening of the life of the structure and a decrease in the quality of service. Since the emergence of this problem, different methods and techniques have been implemented to reduce corrosion damage to reinforcing steel in concrete structures, such as the use of polymeric materials as coatings for the steel rod and inhibitors added to the concrete during mixing, among others, each presenting different limitations in its application. For this reason, a method that has proved effective, cathodic protection, has been used; given the properties attributed to carbon nanotubes (CNTs), these could act as cathodic corrosion protection. A three-electrode electrochemical cell was assembled, with carbon steel as the working electrode, a saturated calomel electrode (SCE) as the reference electrode, and a graphite rod as the counter electrode to close the system. The samples were subjected to a cycling process in order to compare the corrosion performance of a CNT-based coating with that of a commercial anticorrosive paint. The samples were tested at room temperature using an electrolyte consisting of NaCl and NaOH simulating the typical pH of concrete, ranging from 12.6 to 13.9. Three test samples of steel rod were prepared: bare (blank), with commercial anticorrosive paint, and with the CNT-based coating, delimiting the working area to a section of 0.71 cm2. Cyclic voltammetry, linear voltammetry and electrochemical impedance spectroscopy tests were carried out on each of the three samples within a potential window of 0.7 to -1.7 V vs SCE at scan rates of 50 mV/s and 100 mV/s. The impedance values were obtained by applying a sine wave of 50 mV amplitude over a frequency range of 100 kHz to 100 MHz. The results obtained in this study show that the CNT-based coating applied to the steel rod considerably decreased the corrosion rate compared to the commercial anticorrosive paint, as Ecorr progressively increased during the cycling process. In all three cases, the samples were observed by light microscopy throughout the cycling process, and micrographic analysis was performed using scanning electron microscopy (SEM). The electrochemical measurements show that the application of the coating containing carbon nanotubes on the surface of the steel rod greatly increases the corrosion resistance compared to the commercial anticorrosive coating.
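
For readers unfamiliar with the impedance measurement described, the following sketch simulates the impedance spectrum of a simple Randles equivalent circuit over a comparable frequency sweep; the circuit and its parameter values are illustrative assumptions, not fitted to the study's coatings.

```python
import numpy as np

# Simple Randles circuit: solution resistance Rs in series with
# (charge-transfer resistance Rct in parallel with double-layer capacitance Cdl).
# Parameter values are assumed for illustration only.
Rs = 20.0       # ohm
Rct = 5.0e3     # ohm (a well-protected rod would typically show a larger Rct)
Cdl = 50e-6     # F

f = np.logspace(5, -1, 61)          # 100 kHz down to 0.1 Hz
w = 2 * np.pi * f
Z = Rs + 1.0 / (1j * w * Cdl + 1.0 / Rct)   # complex impedance

for fi, zi in list(zip(f, Z))[::10]:
    print(f"f = {fi:10.3g} Hz   |Z| = {abs(zi):9.1f} ohm   "
          f"phase = {np.degrees(np.angle(zi)):6.1f} deg")
```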

Keywords: anticorrosive, carbon nanotubes, corrosion, steel

Procedia PDF Downloads 463
9046 Re-Constructing the Research Design: Dealing with Problems and Re-Establishing the Method in User-Centered Research

Authors: Kerem Rızvanoğlu, Serhat Güney, Emre Kızılkaya, Betül Aydoğan, Ayşegül Boyalı, Onurcan Güden

Abstract:

This study addresses the re-construction and implementation process of the methodological framework developed to evaluate how locative media applications accompany the urban experiences of international students coming to Istanbul through exchange programs in 2022. The research design was built on a three-stage model. In the first stage, the research team conducted a qualitative questionnaire to gain exploratory data. These data were then used to form three persona groups representing the sample by applying cluster analysis. In the second phase, a semi-structured digital diary study was carried out around a gamified task list with a sample selected from the persona groups. This stage proved to be the most difficult for obtaining valid data from the participant group. The research team re-evaluated the design of this second phase to reach the participants who would perform the tasks given by the research team while sharing their momentary city experiences, to ensure a daily data flow for two weeks, and to increase the quality of the obtained data. The final stage, which follows to elaborate on the findings, is the “Walk & Talk”, completed with face-to-face and in-depth interviews. The multiple methods used in the research process were seen to contribute to the depth and data diversity of research conducted in the context of urban experience and locative technologies. In addition, by adapting the research design to the experiences of the users included in the sample, the differences and similarities between the initial research design and the research as applied are shown.

Keywords: digital diary study, gamification, multi-model research, persona analysis, research design for urban experience, user-centered research, “Walk & Talk”

Procedia PDF Downloads 153
9045 The Carbon Footprint Model as a Plea for Cities towards Energy Transition: The Case of Algiers Algeria

Authors: Hachaichi Mohamed Nour El-Islem, Baouni Tahar

Abstract:

Environmental sustainability, more than a trans-disciplinary or scientific issue, is the main problem that characterizes all modern cities nowadays. In developing countries, this concern is expressed in a plethora of critical urban ills: traffic congestion, air pollution, noise, urban decay, and increases in energy consumption and CO2 emissions, which blemish the cityscape and can threaten citizens’ health and welfare. As in other developing-world cities, the rapid growth of Algiers’ population and the increasing scale of the city eventually lead to increases in daily trips, energy consumption and CO2 emissions. In addition, the lack of proper and sustainable planning of the city’s infrastructure is one of the most relevant issues from which Algiers suffers. The aim of this contribution is to estimate the carbon deficit of the city of Algiers, Algeria, using the Ecological Footprint Model (carbon footprint). To achieve this goal, the amount of CO2 from fuel combustion was calculated and aggregated into five sectors (agriculture, industry, residential, tertiary and transportation); Algiers’ biocapacity (CO2 uptake land) was also calculated to determine the ecological overshoot. This study shows that Algiers’ transport system is not sustainable and generates more than 50% of Algiers’ total carbon footprint, which cannot be sequestered by the local forest land. The aim of this research is to show that carbon footprint assessment can be a relevant indicator for designing sustainable strategies and policies striving to reduce CO2 by acting on energy consumption in the transportation sector and reducing the use of fossil fuels as the main energy input.
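
A minimal sketch of the bookkeeping behind such a carbon footprint versus biocapacity comparison is given below; the sector emissions, forest area and conversion factors are assumed placeholder values, not the study's figures.

```python
# Sector CO2 emissions in tonnes per year -- placeholder values, NOT the study's data.
emissions_tCO2 = {
    "agriculture": 0.5e6,
    "industry": 2.0e6,
    "residential": 3.0e6,
    "tertiary": 1.5e6,
    "transportation": 7.5e6,   # illustrative: transport dominating, as reported
}

# Standard Ecological Footprint conversion (approximate, assumed values):
ocean_uptake = 0.28               # fraction of anthropogenic CO2 absorbed by oceans
sequestration_tCO2_per_ha = 2.7   # tCO2 sequestered per hectare of average forest
forest_equivalence_factor = 1.26  # global hectares per world-average forest hectare

forest_area_ha = 30_000.0         # assumed local CO2-uptake (forest) land, hectares

total = sum(emissions_tCO2.values())
carbon_footprint_gha = (total * (1 - ocean_uptake)
                        / sequestration_tCO2_per_ha * forest_equivalence_factor)
biocapacity_gha = forest_area_ha * forest_equivalence_factor

print(f"carbon footprint    : {carbon_footprint_gha:,.0f} gha")
print(f"forest biocapacity  : {biocapacity_gha:,.0f} gha")
print(f"ecological overshoot: {carbon_footprint_gha - biocapacity_gha:,.0f} gha")
print(f"transport share of emissions: {emissions_tCO2['transportation'] / total:.0%}")
```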

Keywords: biocapacity, carbon footprint, ecological footprint assessment, energy consumption

Procedia PDF Downloads 131
9044 Detecting Elderly Abuse in US Nursing Homes Using Machine Learning and Text Analytics

Authors: Minh Huynh, Aaron Heuser, Luke Patterson, Chris Zhang, Mason Miller, Daniel Wang, Sandeep Shetty, Mike Trinh, Abigail Miller, Adaeze Enekwechi, Tenille Daniels, Lu Huynh

Abstract:

Machine learning and text analytics have been used to analyze child abuse, cyberbullying, domestic abuse and domestic violence, and hate speech. However, to the authors’ knowledge, no research to date has used these methods to study elder abuse in nursing homes or skilled nursing facilities from field inspection reports. We used machine learning and text analytics methods to analyze 356,000 inspection reports, which were extracted from CMS Form-2567 field inspections of US nursing homes and skilled nursing facilities between 2016 and 2021. Our algorithm detected occurrences of various types of abuse, including physical abuse, psychological abuse, verbal abuse, sexual abuse, and passive and active neglect. For example, to detect physical abuse, our algorithms search for combinations of phrases and words suggesting willful infliction of damage (hitting, pinching or burning, tethering, tying) or consciously ignoring an emergency. To detect occurrences of elder neglect, our algorithm looks for combinations of phrases and words suggesting both passive neglect (neglecting vital needs, allowing malnutrition and dehydration, allowing decubiti, deprivation of information, limitation of freedom, negligence toward safety precautions) and active neglect (intimidation and name-calling, tying the victim up to prevent falls without consent, consciously ignoring an emergency, not calling a physician in spite of indication, stopping important treatments, failure to provide essential care, deprivation of nourishment, leaving a person alone for an inappropriate amount of time, excessive demands in a situation of care). We further compare the prevalence of abuse before and after Covid-19 related restrictions on nursing home visits. We also identified the facilities with the highest number of abuse cases and no-abuse facilities within a 25-mile radius as the most likely candidates for additional inspections. We also built an interactive display to visualize the location of these facilities.
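
A highly simplified sketch of this kind of phrase matching is shown below; the pattern lists are drawn from the terms quoted in the abstract, but the matching logic is a toy stand-in for the authors' algorithm.

```python
import re

# Simplified pattern sets per abuse category, drawn from the terms in the abstract.
# The study's actual classifier is more sophisticated; this is only a sketch.
PATTERNS = {
    "physical_abuse": [r"\bhit(ting|s)?\b", r"\bpinch(ed|ing)?\b",
                       r"\bburn(ed|ing)?\b", r"\btether(ed|ing)?\b"],
    "passive_neglect": [r"\bmalnutrition\b", r"\bdehydration\b",
                        r"\bdecubit(i|us)\b"],
    "active_neglect": [r"\bname-?calling\b", r"\bintimidat(ion|ed|ing)\b",
                       r"ignor(ed|ing) an emergency",
                       r"deprivation of nourishment"],
}

def flag_report(text: str) -> dict:
    """Return the categories whose patterns appear in one inspection report."""
    text = text.lower()
    hits = {}
    for category, patterns in PATTERNS.items():
        matched = [p for p in patterns if re.search(p, text)]
        if matched:
            hits[category] = matched
    return hits

sample = ("Resident reported being pinched by staff; chart review showed "
          "signs of dehydration and staff ignored an emergency call light.")
print(flag_report(sample))
```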

Keywords: machine learning, text analytics, elder abuse, elder neglect, nursing home abuse

Procedia PDF Downloads 124
9043 Understanding the Utilization of Luffa Cylindrica in the Adsorption of Heavy Metals to Clean Up Wastewater

Authors: Akanimo Emene, Robert Edyvean

Abstract:

In developing countries, low-cost methods of wastewater treatment are highly recommended. Adsorption is an efficient and economically viable treatment process for wastewater. The utilisation of this process is based on understanding the relationship between the growth environment and the metal capacity of the biomaterial. Luffa cylindrica (LC), a plant material, was used as an adsorbent in an adsorption design system for heavy metals. Chemically modified LC was used to adsorb heavy metal ions, lead and cadmium, from aqueous environmental solution under varying experimental conditions. The experimental factors studied were adsorption time, initial metal ion concentration, ionic strength and pH of solution. The chemical nature and surface area of the tissues adsorbing heavy metals in LC biosorption systems were characterised using electron microscopy and infra-red spectroscopy, which showed an increase in surface area and improved adhesion capacity after chemical treatment. Metal speciation showed the binary interaction between the ions and the LC surface as the pH increases. Maximum adsorption occurred between pH 5 and pH 6. The ionic strength of the metal ion solution affects the adsorption capacity through the surface charge and the availability of adsorption sites on the LC. The nature of the metal-surface complexes formed was analysed by fitting kinetic and isotherm models to the experimental data; the pseudo-second-order kinetic model and the two-site Langmuir isotherm model showed the best fit. Through an understanding of this process, there is an opportunity to provide an alternative method for water purification, offering an option when expensive water treatment technologies are not viable in developing countries.
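
The sketch below shows how the two models reported as best fits, the pseudo-second-order kinetic model and a two-site Langmuir isotherm, can be fitted with non-linear least squares; the "measurements" are synthetic placeholders, not the experimental data.

```python
import numpy as np
from scipy.optimize import curve_fit

np.random.seed(3)

# Pseudo-second-order kinetics:  q(t) = k*qe^2*t / (1 + k*qe*t)
def pso(t, qe, k):
    return k * qe**2 * t / (1.0 + k * qe * t)

# Two-site Langmuir isotherm:  q(Ce) = qm1*K1*Ce/(1+K1*Ce) + qm2*K2*Ce/(1+K2*Ce)
def langmuir_two_site(Ce, qm1, K1, qm2, K2):
    return qm1 * K1 * Ce / (1 + K1 * Ce) + qm2 * K2 * Ce / (1 + K2 * Ce)

# Synthetic "measurements" (placeholder values, not the paper's data)
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)            # min
qt = pso(t, qe=25.0, k=0.004) + np.random.normal(0, 0.3, t.size)   # mg/g

Ce = np.array([2, 5, 10, 20, 40, 80, 120], dtype=float)            # mg/L
qe_iso = (langmuir_two_site(Ce, 18, 0.15, 10, 0.01)
          + np.random.normal(0, 0.3, Ce.size))                     # mg/g

(qe_fit, k_fit), _ = curve_fit(pso, t, qt, p0=[20, 0.01])
iso_params, _ = curve_fit(langmuir_two_site, Ce, qe_iso,
                          p0=[15, 0.1, 8, 0.05], maxfev=10000)

print(f"pseudo-second-order: qe = {qe_fit:.1f} mg/g, k = {k_fit:.4f} g/(mg*min)")
print("two-site Langmuir [qm1, K1, qm2, K2]:", np.round(iso_params, 3))
```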

Keywords: adsorption, luffa cylindrica, metal-surface complexes, pH

Procedia PDF Downloads 68
9042 Developing a DNN Model for the Production of Biogas From a Hybrid BO-TPE System in an Anaerobic Wastewater Treatment Plant

Authors: Hadjer Sadoune, Liza Lamini, Scherazade Krim, Amel Djouadi, Rachida Rihani

Abstract:

Deep neural networks are highly regarded for their accuracy in predicting intricate fermentation processes. Their ability to learn from large amounts of data makes them particularly effective models. The primary obstacle in improving the performance of these models is carefully choosing suitable hyperparameters, including the neural network architecture (number of hidden layers and hidden units), activation function, optimizer, learning rate, and other relevant factors. This study predicts biogas production from real wastewater treatment plant data using a hybrid Bayesian optimization with tree-structured Parzen estimator (BO-TPE) approach to optimise a deep neural network (DNN) model. The plant utilizes an upflow anaerobic sludge blanket (UASB) digester that treats industrial wastewater from soft drinks and breweries. The digester has a working volume of 1574 m3 and a total volume of 1914 m3; its internal diameter and height were 19 and 7.14 m, respectively. The data preprocessing was conducted with meticulous attention to preserving data quality while avoiding data reduction. Three normalization techniques were applied to the pre-processed data (MinMaxScaler, RobustScaler and StandardScaler) and compared with the non-normalized data. The RobustScaler approach showed strong predictive ability for estimating the volume of biogas produced. The highest predicted biogas volume was 2236.105 Nm³/d, with coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE) values of 0.712, 164.610, and 223.429, respectively.
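
A minimal sketch of TPE-driven hyperparameter search on RobustScaler-normalized data is given below, using Optuna and a scikit-learn feed-forward regressor as stand-ins; the plant data, the authors' exact network and their search space are not reproduced.

```python
import optuna
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler

# Placeholder regression data standing in for the UASB plant measurements.
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)

def objective(trial):
    # Hyperparameters sampled by the TPE sampler
    n_layers = trial.suggest_int("n_layers", 1, 3)
    units = trial.suggest_int("units", 16, 128, log=True)
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    activation = trial.suggest_categorical("activation", ["relu", "tanh"])
    model = make_pipeline(
        RobustScaler(),
        MLPRegressor(hidden_layer_sizes=(units,) * n_layers,
                     activation=activation, learning_rate_init=lr,
                     max_iter=1000, random_state=0),
    )
    # Maximize cross-validated R^2
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print("best R2:", round(study.best_value, 3))
print("best hyperparameters:", study.best_params)
```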

Keywords: anaerobic digestion, biogas production, deep neural network, hybrid BO-TPE, hyperparameter tuning

Procedia PDF Downloads 21
9041 Collaboration between Grower and Research Organisations as a Mechanism to Improve Water Efficiency in Irrigated Agriculture

Authors: Sarah J. C. Slabbert

Abstract:

The uptake of research as part of the diffusion or adoption of innovation by practitioners, whether individuals or organisations, has been a popular topic in agricultural development studies for many decades. In the classical, linear model of innovation theory, the innovation originates from an expert source such as a state-supported research organisation or academic institution. The changing context of agriculture led to the development of the agricultural innovation systems model, which recognizes innovation as a complex interaction between individuals and organisations, which include private industry and collective action organisations. In terms of this model, an innovation can be developed and adopted without any input or intervention from a state or parastatal research organisation. This evolution in the diffusion of agricultural innovation has put forward new challenges for state or parastatal research organisations, which have to demonstrate the impact of their research to the legislature or a regulatory authority: Unless the organisation and the research it produces cross the knowledge paths of the intended audience, there will be no awareness, no uptake and certainly no impact. It is therefore critical for such a research organisation to base its communication strategy on a thorough understanding of the knowledge needs, information sources and knowledge networks of the intended target audience. In 2016, the South African Water Research Commission (WRC) commissioned a study to investigate the knowledge needs, information sources and knowledge networks of Water User Associations and commercial irrigators with the aim of improving uptake of its research on efficient water use in irrigation. The first phase of the study comprised face-to-face interviews with the CEOs and Board Chairs of four Water User Associations along the Orange River in South Africa, and 36 commercial irrigation farmers from the same four irrigation schemes. Intermediaries who act as knowledge conduits to the Water User Associations and the irrigators were identified and 20 of them were subsequently interviewed telephonically. The study found that irrigators interact regularly with grower organisations such as SATI (South African Table Grape Industry) and SAPPA (South African Pecan Nut Association) and that they perceive these organisations as credible, trustworthy and reliable, within their limitations. State and parastatal research institutions, on the other hand, are associated with a range of negative attributes. As a result, the awareness of, and interest in, the WRC and its research on water use efficiency in irrigated agriculture are low. The findings suggest that a communication strategy that involves collaboration with these grower organisations would empower the WRC to participate much more efficiently and with greater impact in agricultural innovation networks. The paper will elaborate on the findings and discuss partnering frameworks and opportunities to manage perceptions and uptake.

Keywords: agricultural innovation systems, communication strategy, diffusion of innovation, irrigated agriculture, knowledge paths, research organisations, target audiences, water use efficiency

Procedia PDF Downloads 97
9040 Depositional Environment and Diagenetic Alterations, Influences of Facies and Fine Kaolinite Formation Migration on Sandstones’ Reservoir Quality, Sarir Formation, Sirt Basin Libya

Authors: Faraj M. Elkhatri, Hana Ali Allafi

Abstract:

This study examines the spatial and temporal distribution of diagenetic alterations and their impact on the reservoir quality of the Sarir Formation (present-day burial depth of about 9000 feet). Depositional facies and diagenetic alterations are the main controls on the reservoir quality of the Sarir Formation, Sirt Basin, Libya; these are based on lithology and grain size as well as authigenic clay mineral types and their distributions. The petrographic investigation of the study area covered five sandstone wells and concentrated on the main rock components and the parameters that may impact the reservoirs. The main authigenic clay minerals are kaolinite and dickite; these findings were confirmed by X-ray diffraction (XRD) analysis and clay-fraction work. Kaolinite and dickite are extensively present in high amounts in all of the wells, and traces of detrital smectite and lesser amounts of illitized mud-matrix are also found in SEM images. Thin layers of clay present as clay-grain coatings at local depths are interpreted as remains of dissolved clay matrix that is partly transformed into kaolinite adjacent to and towards the pore throats. This may affect most of the pore throats of this sandstone, which are otherwise open and relatively clean, with some fine material formed in occluded pores. This material is identified by EDS analysis as collections of not only kaolinite booklets but also small, disaggregated kaolinite platelets derived from the disaggregation of larger kaolinite booklets. These patches of kaolinite not only fill the pores but also coat some of the surrounding framework grains. Quartz grains, often enlarged by authigenic quartz overgrowths, partially occlude and reduce porosity. Scanning electron microscopy (SEM) was conducted on the post-test samples to examine any mud-filtrate particles that may be in the pore throats, and semi-qualitative elemental data on selected minerals observed during the SEM study were obtained using an energy dispersive spectroscopy (EDS) unit. The samples showed mostly clean, open pore throats, with limited occlusion by kaolinite.

Keywords: pore throat, formation damage, porosity loss, solids plugging

Procedia PDF Downloads 40
9039 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and multiple sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping the magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends highly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than the datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
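
The forward side of the ill-posed inverse problem in Process I is the k-space dipole convolution relating susceptibility to the measured field shift; the toy sketch below illustrates it on a synthetic sphere (grid size and susceptibility values are arbitrary placeholders).

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """k-space dipole kernel D = 1/3 - kz^2/|k|^2 (B0 along z)."""
    kx, ky, kz = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)]
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - KZ**2 / k2
    D[k2 == 0] = 0.0                                  # define the DC term
    return D

# Toy susceptibility map (ppm): a small "iron-rich" sphere in a 64^3 volume.
N = 64
x = np.arange(N) - N // 2
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
chi = np.where(X**2 + Y**2 + Z**2 < 8**2, 0.2, 0.0)   # 0.2 ppm inside the sphere

D = dipole_kernel((N, N, N))
field = np.real(np.fft.ifftn(D * np.fft.fftn(chi)))   # relative field shift (ppm)

print("field range (ppm):", field.min().round(4), "to", field.max().round(4))
```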

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 112
9038 In Vivo Antiulcer and Anti-Helicobacter pylori Activity of Geraniol on Acetic Acid plus Helicobacter pylori Induced Ulcer in Rats

Authors: Subrat Kumar Bhattamisra, Vivian Lee Yean Yan, Chin Koh Lee, Chew Hui Kuean, Yun Khoon Liew, Mayuren Candasamy

Abstract:

Geraniol, an acyclic monoterpenoid, is the main active constituent in the essential oils of rose and palmarosa. Antioxidant, antibacterial, anticancer and antiulcer activities of geraniol have been reported by many researchers. The present investigation was designed to study the in vivo antiulcer and anti-Helicobacter pylori activity of geraniol. Antiulcer and anti-H. pylori activity of geraniol was evaluated on acetic acid plus H. pylori induced ulcers in rats. Acetic acid (0.03 mL) was injected into the sub-serosal layer of the stomach through laparotomy under anaesthesia. Orogastric inoculation of H. pylori (ATCC 43504) was done twice daily for 7 days. Geraniol (15 and 30 mg/kg), vehicle and standard drugs (amoxicillin, 50 mg/kg; clarithromycin, 25 mg/kg; and omeprazole, 20 mg/kg) were administered twice daily for 14 days. Antiulcer activity of geraniol was examined by determining the gastric ulcer index, measuring the volume of gastric juice, pH and total acidity, measuring myeloperoxidase activity, and by histopathological examination. The histopathological investigation assessed the presence of inflammation, white blood cell infiltration, edema and mucosal damage. The presence of H. pylori was detected by placing a biopsy sample from the antral part of the stomach into a rapid urease test. The ulcer index in the H. pylori inoculated control group was 4.13 ± 0.85 and was significantly (P < 0.05) lowered in the geraniol (30 mg/kg) and reference drug treated groups. Geraniol increased the pH of the gastric juice (2.18 ± 0.13 in control vs. 4.14 ± 0.57 with geraniol 30 mg/kg) and significantly (P < 0.01) lowered total acidity at both doses (15 & 30 mg/kg). Myeloperoxidase (MPO) activity was measured in stomach homogenate of all the groups. The H. pylori control group showed a significant (P < 0.05) increase in MPO activity compared to the normal control group; geraniol (30 mg/kg) produced a significant (P < 0.05) reduction and was the most effective among all the groups. Histopathological examination of rat stomach was scored, and the total score for the H. pylori control group was 8. After geraniol (30 mg/kg) and reference drug treatment, the histopathological score was significantly decreased, to 3.5 and 2.0 respectively. The percentage inhibition of H. pylori infection with geraniol (30 mg/kg) and the reference drugs was found to be 40% and 50% respectively, whereas 100% infection was observed in the H. pylori control group. Geraniol exhibited significant antiulcer and anti-H. pylori activity in the rats. Thus, geraniol has potential for further development as an effective medication for treating H. pylori associated ulcers.

Keywords: geraniol, Helicobacter pylori ATCC 43504, myeloperoxidase, ulcer

Procedia PDF Downloads 323
9037 Investment and Economic Growth: An Empirical Analysis for Tanzania

Authors: Manamba Epaphra

Abstract:

This paper analyzes the causal effect between domestic private investment, public investment, foreign direct investment and economic growth in Tanzania during the 1970-2014 period. A modified neo-classical growth model that includes control variables such as trade liberalization, life expectancy and macroeconomic stability, proxied by inflation, is used to estimate the impact of investment on economic growth. In addition, the economic growth models based on Phetsavong and Ichihashi (2012), and Le and Suruga (2005), are used to estimate the crowding-out effect of public investment on private domestic investment on the one hand and on foreign direct investment on the other. A correlation test is applied to check the correlation among independent variables, and the results show very low correlation, suggesting that multicollinearity is not a serious problem. Moreover, the diagnostic tests, including the RESET regression specification error test, the Breusch-Godfrey serial correlation LM test, the Jarque-Bera normality test and White's heteroskedasticity test, reveal that the model has no signs of misspecification and that the residuals are serially uncorrelated, normally distributed and homoskedastic. Generally, the empirical results show that domestic private investment plays an important role in economic growth in Tanzania. FDI also tends to affect growth positively, while control variables such as high population growth and inflation appear to harm economic growth. Results also reveal that control variables such as trade openness and life expectancy improvement tend to increase real GDP growth. Moreover, a negative, albeit weak, association between public and private investment suggests that the positive effect of domestic private investment on economic growth is reduced when the public investment-to-GDP ratio exceeds 8-10 percent. Thus, there is a great need to promote domestic saving so as to encourage domestic investment for economic growth.
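
The sketch below shows how the named diagnostics can be run with statsmodels (assuming a recent release that provides linear_reset); the series are synthetic placeholders, not the Tanzanian data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_white, linear_reset
from statsmodels.stats.stattools import jarque_bera

# Synthetic placeholder data standing in for the 1970-2014 annual series.
rng = np.random.default_rng(0)
n = 45
X = rng.normal(size=(n, 4))     # columns: private inv., public inv., FDI, inflation
beta = np.array([0.6, 0.1, 0.3, -0.2])
growth = X @ beta + rng.normal(scale=0.5, size=n)

Xc = sm.add_constant(X)
res = sm.OLS(growth, Xc).fit()

bg_lm, bg_p, _, _ = acorr_breusch_godfrey(res, nlags=2)      # serial correlation
jb, jb_p, skew, kurt = jarque_bera(res.resid)                # normality
w_lm, w_p, _, _ = het_white(res.resid, Xc)                   # heteroskedasticity
reset = linear_reset(res, power=3, use_f=True)               # specification (RESET)

print(f"Breusch-Godfrey p = {bg_p:.3f}")
print(f"Jarque-Bera     p = {jb_p:.3f}")
print(f"White           p = {w_p:.3f}")
print(f"RESET           p = {reset.pvalue:.3f}")
```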

Keywords: FDI, public investment, domestic private investment, crowding out effect, economic growth

Procedia PDF Downloads 264
9036 Transient Simulation Using SPACE for ATLAS Facility to Investigate the Effect of Heat Loss on Major Parameters

Authors: Suhib A. Abu-Seini, Kyung-Doo Kim

Abstract:

A heat loss model for the ATLAS facility was introduced using SPACE code predefined correlations and various dialing factors. All previous simulations were carried out with a heat-loss-free input: the facility was considered to be completely insulated, and the core power was reduced by the experimentally measured values of heat loss to compensate for the lost heat. This study, in contrast, considers heat loss throughout the simulation. The new heat loss model affects the SPACE code simulation because heat leaking out of the system throughout a transient alters many parameters related to temperature and temperature differences. For that purpose, a station blackout followed by a multiple steam generator tube rupture accident is simulated using both the insulated-system approach and the newly introduced heat loss input for the steady state. Major parameters such as system temperatures, pressure values and flow rates are compared, and various analyses are suggested upon them, as the experimental values will not be the reference used to validate the expected outcome. This study will not only show the significance of considering heat loss in the processes of prevention and mitigation of various incidents, design-basis and beyond-design-basis accidents, by giving a detailed account of the behavior of the ATLAS facility during both steady state and a major transient, but will also present a verification of how credible the acquired ATLAS data are, since heat loss values for the steady state were already mismatched between the SPACE simulation results and the ATLAS data acquisition system. Acknowledgement: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea.

Keywords: ATLAS, heat loss, simulation, SPACE, station blackout, steam generator tube rupture, verification

Procedia PDF Downloads 209
9035 Building Information Management Advantages, Adaptation, and Challenges of Implementation in Kabul Metropolitan Area

Authors: Mohammad Rahim Rahimi, Yuji Hoshino

Abstract:

Building Information Management (BIM) has in recent years received widespread consideration in Architecture, Engineering and Construction (AEC). BIM has been bringing innovation to the AEC industry and has the ability to improve the construction industry with higher quality and reduced project time and budget. BIM supports models and processes in the AEC industry; the process includes, but is not limited to, the project life cycle, estimating, delivery and, generally, the way the project is managed. This research examines the advantages of BIM, its adaptation and the challenges of its implementation in the Kabul region. The Capital Region Independent Development Authority (CRIDA) is responsible for implementing development projects in the Kabul region. The study considers the advantages of, and reasons for, BIM adoption in Afghanistan based on an online survey and data. In addition, five projects were studied; the reason for considering them was their many design revisions and changes. Although most of the projects had problems in the design and implementation stages, the canal project is discussed in detail because the main reasons for its problems, namely repeated changes and revisions, stemmed from the lack of information, planning and management. In addition, two projects based on BIM utilization in Japan, the Shinsuizenji Station and Oita River dam projects, are also discussed; these have been implemented, and are being implemented, in accordance with BIM requirements. The investigation focused on BIM usage and the project implementation process. Eventually, these projects are compared with the CRIDA projects with respect to BIM utilization in Japan, focusing on the use of the model and the way problems are solved based upon BIM. In conclusion, BIM has the capacity to prevent repeated design changes and revisions. To achieve these objectives, it is necessary to focus on data management and sharing, BIM training and the use of new technology.

Keywords: construction information management, implementation and adaptation of BIM, project management, developing countries

Procedia PDF Downloads 104
9034 The Feasibility of Glycerol Steam Reforming in an Industrial Sized Fixed Bed Reactor Using Computational Fluid Dynamic (CFD) Simulations

Authors: Mahendra Singh, Narasimhareddy Ravuru

Abstract:

For the past decade, the production of biodiesel has significantly increased along with that of its by-product, glycerol. The massive entry of biodiesel-derived glycerol into the glycerol market has caused its value to plummet. Newer ways to utilize the glycerol by-product must be implemented or the biodiesel industry will face serious economic problems. The biodiesel industry should consider steam reforming glycerol to produce hydrogen gas. Steam reforming is the most efficient way of producing hydrogen, and there is a lot of demand for it in the petroleum and chemical industries. This study investigates the feasibility of glycerol steam reforming in an industrial-sized fixed bed reactor. In this paper, using computational fluid dynamic (CFD) simulations, the extent of the transport resistances that would occur in an industrial-sized reactor can be visualized. An important parameter in reactor design is the size of the catalyst particle. The catalyst cannot be so large that transport resistances become too high, but also not so small that an extraordinary amount of pressure drop occurs. The goal of this paper is to find the best catalyst size, under various flow rates, that will result in the highest conversion. Computational fluid dynamics simulated the transport resistances, and a pseudo-homogeneous reactor model was used to evaluate the pressure drop and conversion. CFD simulations showed that glycerol steam reforming has strong internal diffusion resistances resulting in extremely low effectiveness factors. In the pseudo-homogeneous reactor model, the highest conversion obtained at a Reynolds number of 100 (29.5 kg/h) was 9.14%, using a 1/6 inch catalyst diameter. Due to the low effectiveness factors and high carbon deposition rates, a fluidized bed is recommended as the appropriate reactor to carry out glycerol steam reforming.
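
The trade-off described, internal diffusion limitation versus bed pressure drop, can be sketched with a first-order Thiele modulus/effectiveness factor for a spherical pellet and the Ergun equation; all kinetic and bed parameters below are assumed for illustration and are not the paper's fitted values.

```python
import numpy as np

# Assumed (illustrative) parameters -- not the paper's fitted values.
k_v = 50.0         # 1/s, pseudo-first-order rate constant (per catalyst volume)
D_eff = 1.0e-6     # m^2/s, effective diffusivity in the pellet
mu = 3.0e-5        # Pa*s, gas viscosity at reforming temperature
rho_g = 0.4        # kg/m^3, gas density
u = 0.5            # m/s, superficial velocity
eps = 0.4          # bed void fraction
L_bed = 3.0        # m, bed length

def effectiveness_factor(d_p):
    """First-order reaction in a spherical pellet."""
    R = d_p / 2.0
    phi = R * np.sqrt(k_v / D_eff)                     # Thiele modulus
    return (3.0 / phi**2) * (phi / np.tanh(phi) - 1.0)

def ergun_dp(d_p):
    """Ergun equation pressure drop over the bed (Pa)."""
    a = 150 * mu * (1 - eps)**2 * u / (eps**3 * d_p**2)   # viscous term
    b = 1.75 * rho_g * (1 - eps) * u**2 / (eps**3 * d_p)  # inertial term
    return (a + b) * L_bed

for d_p_inch in (1/16, 1/8, 1/6, 1/4):
    d_p = d_p_inch * 0.0254                            # m
    print(f"d_p = 1/{round(1/d_p_inch)} in : "
          f"eta = {effectiveness_factor(d_p):.3f}, "
          f"dP = {ergun_dp(d_p)/1e3:.1f} kPa")
```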

Keywords: computational fluid dynamic, fixed bed reactor, glycerol, steam reforming, biodiesel

Procedia PDF Downloads 287
9033 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli

Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma

Abstract:

Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge. Scrutinizing alternative weapons like antimicrobial peptides to strengthen the existing tuberculosis artillery is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) along with isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify its short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte derived macrophages (MDMs). Log phase M. tb was grown along with HBD-1 and Pep B for 7 days. M. tb infected MDMs were treated with HBD-1 and Pep B for 72 hours. Thereafter, colony forming unit (CFU) enumeration was performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. The dormant M. tb models were prepared by following two approaches and treated with different concentrations of HBD-1 and Pep B. Firstly, 20-22 day old M. tb H37Rv was grown in potassium-deficient Sauton medium for 35 days. The presence of dormant bacilli was confirmed by Nile red staining. Dormant bacilli were further treated with rifampicin, isoniazid, HBD-1 and its motif for 7 days. The effect of both peptides on latent bacilli was assessed by CFU and most probable number (MPN) enumeration. Secondly, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days. Histopathology was done to confirm granuloma formation. The granuloma thus formed was incubated for 72 hours with rifampicin, HBD-1 and Pep B individually. The difference in bacillary load was determined by CFU enumeration. The minimum inhibitory concentrations of HBD-1 and Pep B restricting growth of mycobacteria in vitro were 2 μg/ml and 20 μg/ml respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1 μg/ml and 5 μg/ml respectively. The Nile red positive bacterial population, high MPN/low CFU counts and tolerance to isoniazid confirmed the establishment of the potassium deficiency-based dormancy model. HBD-1 (8 μg/ml) showed 96% and 99% killing, and Pep B (40 μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration respectively. Further, H&E stained aggregates of macrophages and lymphocytes, acid fast bacilli surrounded by cellular aggregates, and rifampicin resistance indicated the formation of the human granuloma dormancy model. HBD-1 (8 μg/ml) led to an 81.3% reduction in CFU, whereas its motif Pep B (40 μg/ml) showed only a 54.66% decrease in bacterial load inside the granuloma. Thus, the present study indicated that HBD-1 and its motif are effective antimicrobial agents against both actively growing and dormant M. tb. They should be further explored to tap their potential in designing a powerful weapon for combating tuberculosis.

Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis

Procedia PDF Downloads 246
9032 Selecting the Best Risk Exposure to Assess Collision Risks in Container Terminals

Authors: Mohammad Ali Hasanzadeh, Thierry Van Elslander, Eddy Van De Voorde

Abstract:

About 90 percent of world merchandise trade by volume being carried by sea. Maritime transport remains as back bone behind the international trade and globalization meanwhile all seaborne goods need using at least two ports as origin and destination. Amid seaborne traded cargos, container traffic is a prosperous market with about 16% in terms of volume. Albeit containerized cargos are less in terms of tonnage but, containers carry the highest value cargos amongst all. That is why efficient handling of containers in ports is very important. Accidents are the foremost causes that lead to port inefficiency and a surge in total transport cost. Having different port safety management systems (PSMS) in place, statistics on port accidents show that numerous accidents occur in ports. Some of them claim peoples’ life; others damage goods, vessels, port equipment and/or the environment. Several accident investigation illustrate that the most common accidents take place throughout transport operation, it sometimes accounts for 68.6% of all events, therefore providing a safer workplace depends on reducing collision risk. In order to quantify risks at the port area different variables can be used as exposure measurement. One of the main motives for defining and using exposure in studies related to infrastructure is to account for the differences in intensity of use, so as to make comparisons meaningful. In various researches related to handling containers in ports and intermodal terminals, different risk exposures and also the likelihood of each event have been selected. Vehicle collision within the port area (10-7 per kilometer of vehicle distance travelled) and dropping containers from cranes, forklift trucks, or rail mounted gantries (1 x 10-5 per lift) are some examples. According to the objective of the current research, three categories of accidents selected for collision risk assessment; fall of container during ship to shore operation, dropping container during transfer operation and collision between vehicles and objects within terminal area. Later on various consequences, exposure and probability identified for each accident. Hence, reducing collision risks profoundly rely on picking the right risk exposures and probability of selected accidents, to prevent collision accidents in container terminals and in the framework of risk calculations, such risk exposures and probabilities can be useful in assessing the effectiveness of safety programs in ports.

Keywords: container terminal, collision, seaborne trade, risk exposure, risk probability

Procedia PDF Downloads 353
9031 Analysis of Two-Echelon Supply Chain with Perishable Items under Stochastic Demand

Authors: Saeed Poormoaied

Abstract:

Perishability and developing an intelligent control policy for perishable items are the major concerns of marketing managers in a supply chain. In this study, we address a two-echelon supply chain problem for perishable items with a single vendor and a single buyer. The buyer adopts an age-based continuous review policy which works by taking both the stock level and the aging process of items into account. The vendor works under the warehouse framework, where its lot size is determined with respect to the batch size of the buyer. The model holds for a positive and fixed lead time for the buyer, and zero lead time for the vendor. The demand follows a Poisson process and any unmet demand is lost. We provide exact analytic expressions for the operational characteristics of the system by using the renewal reward theorem. Items have a fixed lifetime after which they become unusable and are disposed of from the buyer's system. The age of items starts when they are unpacked and ready for consumption at the buyer. When items are held by the vendor, there is no aging process, which results in no perishing at the vendor's site. The model is developed under the centralized framework, which takes the expected profit of both vendor and buyer into consideration. The goal is to determine the optimal policy parameters under the service level constraint at the retailer's site. A sensitivity analysis is performed to investigate the effect of the key input parameters on the expected profit and order quantity in the supply chain. The efficiency of the proposed age-based policy is also evaluated through a numerical study. Our results show that when the unit perishing cost is negligible, a significant cost saving is achieved.

Keywords: two-echelon supply chain, perishable items, age-based policy, renewal reward theorem

Procedia PDF Downloads 129
9030 A Study on the Correlation Analysis between the Pre-Sale Competition Rate and the Apartment Unit Plan Factor through Machine Learning

Authors: Seongjun Kim, Jinwooung Kim, Sung-Ah Kim

Abstract:

The development of information and communication technology also affects human cognition and thinking; in the field of design especially, new techniques are being tried. In architecture, new design methodologies such as machine learning or data-driven design are being applied. In particular, these methodologies are used for analyzing the factors related to the value of real estate or for analyzing feasibility in the early planning stage of apartment housing. However, since the value of apartment buildings is often determined by external factors such as location and traffic conditions rather than by the interior elements of the buildings, data are rarely used in the design process. Therefore, although the technical conditions are available, it is difficult to apply data-driven design to the internal elements of the apartment during the design process. As a result, designers of apartment housing have been forced to rely on designer experience or modular design alternatives rather than data-driven design at the design stage, resulting in a uniform arrangement of space in apartment housing. The purpose of this study is to propose a methodology that supports designers in designing apartment unit plans with high consumer preference, by deriving through machine learning the correlation and importance of the floor plan elements of apartments preferred by consumers and reflecting this information in the early design process. Data on the pre-sale competition rate and the elements of the floor plan are collected, and the correlation between the pre-sale competition rate and the independent variables is analyzed through machine learning. This analytical model can be used to review the apartment unit plan produced by the designer and to assist the designer. It therefore becomes possible to produce floor plans of apartment housing with high preference, because the trained model can provide feedback on apartment unit plans when it is used in the floor plan design of apartment housing.
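
One possible form of such an analysis is sketched below: a regressor relating unit-plan features to the pre-sale competition rate, with permutation importance ranking the features; the feature set, data and model choice are illustrative assumptions, since the paper's exact algorithm is not specified here.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder dataset: unit-plan features vs. pre-sale competition rate.
rng = np.random.default_rng(42)
n = 400
df = pd.DataFrame({
    "n_bays": rng.integers(2, 5, n),              # number of front bays
    "living_room_area": rng.normal(30, 5, n),     # m^2
    "balcony_area": rng.normal(8, 2, n),          # m^2
    "bathroom_count": rng.integers(1, 3, n),
    "storage_area": rng.normal(3, 1, n),          # m^2
})
# Synthetic target loosely favouring more bays and larger living rooms.
df["competition_rate"] = (2.0 * df["n_bays"] + 0.3 * df["living_room_area"]
                          + rng.normal(0, 2, n))

X = df.drop(columns="competition_rate")
y = df["competition_rate"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("hold-out R2:", round(model.score(X_te, y_te), 3))

# Rank floor-plan features by permutation importance on the hold-out set.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:18s} importance = {score:.3f}")
```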

Keywords: apartment unit plan, data-driven design, design methodology, machine learning

Procedia PDF Downloads 244
9029 The Stereotypical Images of Marginalized Women in the Poetry of Rita Dove

Authors: Wafaa Kamal Isaac

Abstract:

This paper attempts to shed light upon the stereotypical images of marginalized black women as shown through the poetry of Rita Dove. At the same time, it explores how stereotypical images held by society and public perceptions perpetuate the marginalization of black women. Dove is considered one of the most fundamental African-American poets who devoted her writings to exploring the problem of identity that confronted marginalized women in America. Besides tackling the issue of black women’s stereotypical images, this paper focuses upon the psychological damage which black women suffered because of their stripped identity. In ‘Thomas and Beulah’, Dove reflects the black woman’s longing for her homeland in order to make up for her lost identity. This poem conveys atavistic feelings through certain recurrent images, both aural and visual, like the image of Beulah, who represents the African-American woman searching for an identity, as she is denied and humiliated in the newly founded society. In an attempt to protest against the stereotypical mule image that had been imposed upon black women in America, Dove in ‘On the Bus with Rosa Parks’ tries to ignite the beaten spirits to struggle for their own rights by revitalizing the rebellious nature and strong determination of the historical figure ‘Rosa Parks’ that sparked the Civil Rights Movement. In ‘Daystar’, Dove shows that black women are subjected to double-edged oppression: firstly, in terms of race, as a black woman in an unjust white society that violates her rights due to her black origins, and secondly, in terms of gender, as a member of the female sex that is meant to exist only to serve man’s needs. Similarly, in the ‘Adolescence’ series, Dove focuses on the double marginalization which black women have experienced, concluding that the marginalization of black women has resulted from the domination of the masculine world and the oppression of the white world. Moreover, Dove’s ‘Beauty and the Beast’ investigates African-American women’s problem of estrangement and identity crisis in America. It also sheds light upon the psychological consequences that resulted from the violation of marginalized women’s identity. Furthermore, this poem shows black women’s self-debasement, helplessness, and double consciousness that emanate from the sense of uprootedness. Finally, this paper finds that the negative, debased and inferior stereotypical image held by society not only contributed to the marginalization of black women but also silenced and muted their voices.

Keywords: stereotypical images, marginalized women, Rita Dove, identity

Procedia PDF Downloads 138
9028 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow

Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen

Abstract:

Turbulence modelling is still evolving, and efforts continue to improve and develop numerical methods that simulate real turbulence structures using empirical and experimental information. The monotonically integrated large eddy simulation (MILES) approach is attractive for modelling turbulence in high-Re flows, as it is based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, and the action of the SGS model has been included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source CFD libraries of OpenFOAM. The role of the flux limiter schemes, namely Gamma, superbee, van Albada, and van Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow at a friction Reynolds number Reτ = 180, and the numerical predictions are compared with well-established Direct Numerical Simulation (DNS) results for wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van Leer, and van Albada limiters produced more diffusion and overpredicted the velocity profiles, while the superbee scheme reproduced the velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviated slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space to draw conclusions on the performance of the flux limiter schemes in the OpenFOAM context.
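For readers unfamiliar with the limiter functions compared here, the following is an illustrative sketch, not the OpenFOAM implementation, of the standard TVD forms of the superbee, van Leer, and van Albada limiters and of a limited face interpolation built from them; the function names and the small tolerance are choices made for this example.

```python
# Standard TVD flux limiter functions of the smoothness ratio r, used to blend
# low- and high-order face interpolations in schemes such as those compared here.
import numpy as np

def superbee(r):
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0), np.minimum(r, 2.0)))

def van_leer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def van_albada(r):
    # Clipped to zero for r < 0, as in the TVD form of the limiter.
    return np.maximum(0.0, (r**2 + r) / (r**2 + 1.0))

def limited_face_value(phi_U, phi_C, phi_D, limiter=superbee):
    """Limited face interpolation between owner cell C and downwind cell D,
    with U the far upwind neighbour (limited second-order upwind form)."""
    denom = np.where(np.abs(phi_D - phi_C) > 1e-12, phi_D - phi_C, 1e-12)
    r = (phi_C - phi_U) / denom          # smoothness ratio
    return phi_C + 0.5 * limiter(r) * (phi_D - phi_C)

# Example: a smooth region (r close to 1) keeps near second-order accuracy.
print(limited_face_value(1.0, 2.0, 3.0))            # -> 2.5 with superbee
print(limited_face_value(1.0, 2.0, 3.0, van_leer))  # -> 2.5 with van Leer
```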

Keywords: flux limiters, implicit SGS, MILES, OpenFOAM, turbulence statistics

Procedia PDF Downloads 168
9027 Numerical Prediction of Width Crack of Concrete Dapped-End Beams

Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo

Abstract:

Several methods have been used to predict cracking in concrete structures under loading, and finite element analysis is an alternative that gives good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of dapped-end design, since cracks that exceed the allowable widths are unacceptable in environments that are aggressive to the reinforcing steel. To simulate the crack width, the discrete crack approach was adopted by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two dapped-end cases were constructed and tested in the Structures and Materials Laboratory of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as vertical and horizontal rings; in the second case, 50% of the vertical stirrups connecting the dapped end to the main part of the beam were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading up to the service load. The models were built using the software package ANSYS v. 16.2. The concrete was modeled using three-dimensional SOLID65 solid elements, which are capable of cracking in tension and crushing in compression, and a Drucker-Prager yield surface was used to include plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods, such as the nodal release technique, adopting softening relationships between tractions and separations, which introduce a critical fracture energy equal to the energy required to break apart the interface surfaces; this technique is the CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode-I-dominated bilinear CZM assumes that the separation of the material interface is governed by the displacement jump normal to the interface. The crack opening was characterized by the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the re-entrant corner, where the crack develops. To validate the proposed approach, the results obtained with this procedure were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained, and the numerical models also allowed the load-crack width curves to be derived. In both cases, the proposed model confirms its capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental to the prediction of crack width. The results regarding crack width can be considered good from a practical point of view, and the predicted load-displacement curves and crack locations compared favorably with the tests.
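As an illustration of the bilinear traction-separation relationship underlying such a Mode I cohesive zone model, the sketch below relates normal traction to crack opening; the parameter values are assumed for the example and are not the calibration used in this study.

```python
# Mode I bilinear cohesive law: linear elastic loading up to damage initiation,
# then linear softening until complete debonding at the critical opening.
import numpy as np

sigma_max = 3.0      # maximum normal contact stress [MPa] (assumed)
delta_c   = 0.15     # contact gap at completion of debonding [mm] (assumed)
K         = 1.0e3    # initial (penalty) stiffness [MPa/mm] (assumed)
delta_0   = sigma_max / K               # opening at damage initiation
G_c       = 0.5 * sigma_max * delta_c   # Mode I fracture energy [N/mm]

def traction(delta):
    """Normal traction for a given crack opening delta (bilinear softening)."""
    delta = np.asarray(delta, dtype=float)
    t = np.where(delta <= delta_0,
                 K * delta,                                             # elastic branch
                 sigma_max * (delta_c - delta) / (delta_c - delta_0))   # softening branch
    return np.clip(t, 0.0, None)  # fully debonded (zero traction) beyond delta_c

for opening in np.linspace(0.0, 1.2 * delta_c, 7):
    print(f"opening = {opening:.4f} mm -> traction = {traction(opening):7.3f} MPa")
print("Mode I fracture energy G_c =", G_c, "N/mm")
```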

Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis

Procedia PDF Downloads 147
9026 How Message Framing and Temporal Distance Affect Word of Mouth

Authors: Camille Lacan, Pierre Desmet

Abstract:

In the crowdfunding model, a campaign succeeds by collecting the required funds within a predefined duration. The success of a crowdfunding campaign therefore depends both on the capacity to attract members of the relevant online communities and on those members’ involvement in online word-of-mouth recommendations. To maximize the campaign’s probability of success, project creators (i.e., organizations appealing for financial resources) send messages asking contributors to spread word of mouth. Internet users relay information about projects through word of mouth, defined as “a critical tool for facilitating information diffusion throughout online communities”. The effectiveness of these messages depends on their framing and on the time at which they are sent to contributors (i.e., at the start of the campaign or close to the deadline). This article addresses the following question: what are the effects of message framing and temporal distance on the willingness to share word of mouth? Drawing on Prospect Theory and Construal Level Theory, this study examines the interplay between message framing (gains vs. losses) and temporal distance (message sent close to vs. far from the deadline) on the intention to share word of mouth. A between-subjects experimental design was conducted to test the research model. Results show significant differences between a loss-framed message (benefits forgone if the campaign fails) combined with a short deadline (ending tomorrow) and a gain-framed message (benefits gained if the campaign succeeds) combined with a distant deadline (ending in three months). However, this effect is moderated by the anticipated regret of campaign failure and by temporal orientation, and these moderating effects help specify the boundary conditions of the framing effect. Message framing and the timing of the message relative to the deadline are thus key decisions for influencing the willingness to share word of mouth.
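Purely as an illustration of how such a 2 x 2 between-subjects framing-by-distance interaction can be tested, the sketch below uses synthetic data rather than the authors' dataset, with hypothetical variable names and effect sizes.

```python
# 2 x 2 between-subjects design: message framing (gain vs. loss) crossed with
# temporal distance (near vs. distant deadline); the framing x distance
# interaction on willingness to share word of mouth is tested via OLS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200  # hypothetical number of participants
framing = rng.choice(["gain", "loss"], size=n)
distance = rng.choice(["near", "distant"], size=n)

# Simulated outcome: loss framing helps more under a near deadline (interaction).
wom = (4.0
       + 0.2 * (framing == "loss")
       + 0.1 * (distance == "near")
       + 0.8 * ((framing == "loss") & (distance == "near"))
       + rng.normal(0, 1, n))

df = pd.DataFrame({"wom_intention": wom, "framing": framing, "distance": distance})
model = smf.ols("wom_intention ~ C(framing) * C(distance)", data=df).fit()
print(model.summary().tables[1])  # the interaction term carries the framing x distance test
```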

Keywords: construal levels, crowdfunding, message framing, word of mouth

Procedia PDF Downloads 230
9025 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, which limits the search space for potential process options. PI performed at more detailed levels of a process can increase the size of this search space. PI can be achieved at the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described in terms of their enhancement. The objective of the current work is thus the generation of numerous phenomena-based process alternatives and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is decomposed into functions (e.g., reaction, separation), and these functions are further broken down into the phenomena required to perform them; for example, separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using a solid catalyst; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, so screening is carried out to discard the meaningless ones; for example, phase change phenomena require the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute; a combination may accomplish a single function or multiple functions, i.e., it might perform reaction alone or reaction with separation. Allotting the combinations to the functions needed for the process creates a series of options for carrying out each function, and combining these options across the functions of the process generates a superstructure of process options. These process options, each described by a list of phenomena for every function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model; a series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the highest product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined, and it can be applied to any chemical or biochemical process because of its generic nature.
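A simplified sketch of the combinatorial step described above is given below; the phenomena names and the second screening rule are assumptions made for illustration, not the paper's actual knowledge base.

```python
# Enumerate phenomena combinations, screen out infeasible ones, and encode the
# survivors as binary (1/0) vectors for a downstream model-generation step.
from itertools import combinations

PHENOMENA = ["mixing", "reaction", "heat_transfer",
             "phase_change", "vapour_liquid_equilibrium", "liquid_liquid_equilibrium"]

def is_feasible(combo):
    combo = set(combo)
    # Rule from the text: phase change needs co-present energy transfer.
    if "phase_change" in combo and "heat_transfer" not in combo:
        return False
    # Assumed rule: at most one equilibrium type per combination.
    if {"vapour_liquid_equilibrium", "liquid_liquid_equilibrium"} <= combo:
        return False
    return True

def encode(combo):
    """Binary vector over the phenomena list, as passed to model generation."""
    return [1 if p in combo else 0 for p in PHENOMENA]

options = []
for size in range(1, len(PHENOMENA) + 1):
    for combo in combinations(PHENOMENA, size):
        if is_feasible(combo):
            options.append(encode(combo))

print(f"{len(options)} feasible phenomena combinations generated")
print("example encoding:", options[0], "->", PHENOMENA)
```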

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 219
9024 The Microstructural and Mechanical Characterization of Organo-Clay-Modified Bitumen, Calcareous Aggregate, and Organo-Clay Blends

Authors: A. Gürses, T. B. Barın, Ç. Doğar

Abstract:

Bitumen, a viscous organic mixture with a varied chemical composition, has been widely used as the binder for aggregate in road pavements due to its good viscoelastic properties. Bitumen is a liquid at high temperatures and becomes brittle at low temperatures, and this temperature sensitivity can cause rutting and cracking of the pavement and limit its application; therefore, the properties of existing asphalt materials need to be enhanced. Pavements with polymer-modified bitumen exhibit greater resistance to rutting and thermal cracking, as well as decreased fatigue damage, stripping, and temperature susceptibility; however, polymer modifiers are expensive and their application has disadvantages. Bituminous mixtures are composed of very irregular aggregates bound together with hydrocarbon-based asphalt, with a low volume fraction of voids dispersed within the matrix. Montmorillonite (MMT) is a low-cost and abundant layered silicate consisting of tetrahedral silicate and octahedral hydroxide sheets. Recently, layered silicates have been widely used for the modification of polymers, as well as in many other fields; however, there are currently few studies on the preparation of MMT-modified asphalt. In this study, organo-clay-modified bitumen and blends of calcareous aggregate and organo-clay were prepared by a hot blending method using organo-montmorillonite (OMMT), which was synthesized from MMT with a cationic surfactant (cetyltrimethylammonium bromide, CTAB) and a long-chain hydrocarbon. When the exchangeable cations in the interlayer region of pristine MMT are exchanged with hydrocarbon-bearing surfactant ions, the MMT becomes organophilic and more compatible with bitumen. The effects of the superhydrophobic OMMT on the microstructural and mechanical properties (Marshall stability and volumetric parameters) of the prepared blends were investigated. The stability and volumetric parameters of the prepared blends were measured using the Marshall test. In addition, to investigate the morphological and microstructural properties of the organo-clay-modified bitumen and of the calcareous aggregate and organo-clay blends, SEM and HRTEM images were taken. It was observed that the stability and volumetric parameters of the prepared mixtures improved significantly compared to conventional hot mixes and even the stone matrix mixture. A microstructural analysis based on the SEM images indicates that the organo-clay platelets dispersed in the bitumen play a dominant role in increasing the effectiveness of bitumen-aggregate interactions.
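For reference, the volumetric parameters reported from Marshall testing are typically computed from a few specific gravities; the sketch below shows these standard relationships with assumed example inputs, not measurements from this study.

```python
# Standard Marshall volumetric relationships: air voids (Va), voids in mineral
# aggregate (VMA) and voids filled with asphalt (VFA), all in percent.
def marshall_volumetrics(gmb, gmm, gsb, p_binder):
    """gmb: bulk specific gravity of the compacted mix
    gmm: theoretical maximum specific gravity of the mix
    gsb: bulk specific gravity of the combined aggregate
    p_binder: binder content, percent by total mass of mix"""
    p_agg = 100.0 - p_binder              # aggregate content, % of mix mass
    va = 100.0 * (gmm - gmb) / gmm        # air voids
    vma = 100.0 - (gmb * p_agg / gsb)     # voids in mineral aggregate
    vfa = 100.0 * (vma - va) / vma        # voids filled with asphalt
    return va, vma, vfa

# Assumed example values, chosen only to show typical magnitudes.
va, vma, vfa = marshall_volumetrics(gmb=2.35, gmm=2.45, gsb=2.70, p_binder=5.0)
print(f"Va = {va:.2f} %, VMA = {vma:.2f} %, VFA = {vfa:.2f} %")
```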

Keywords: hot mix asphalt, stone matrix asphalt, organo clay, Marshall test, calcareous aggregate, modified bitumen

Procedia PDF Downloads 217
9023 Looking beyond Corporate Social Responsibility to Sustainable Development: Conceptualisation and Theoretical Exploration

Authors: Mercy E. Makpor

Abstract:

The traditional idea of Corporate Social Responsibility (CSR) has gone beyond merely ensuring safe environments, addressing global warming, and securing good living standards and conditions for society at large. The paradigm shift is towards a focus on strategic objectives and long-term value creation for both businesses and society, with a view to a realistic future. As an important approach to solving social and environmental issues, CSR has been accepted globally, yet the approach is expected to go beyond where it currently stands. Much is expected from businesses and governments at every level, globally and locally. This leads back to the original idea of the concept, that is, how it originated and how it has been perceived over the years. Little wonder there have been many definitions of the concept without a single globally accepted one; the definition of CSR given by the European Commission is adopted for the purpose of this paper. Sustainable Development (SD), on the other hand, has been viewed in recent years as an ethical concept explained in the UN report “Our Common Future,” also referred to as the Brundtland Report, which summarises the need for SD to take place in the present without compromising the future. More recently, the 21st-century sustainability framework known as the Triple Bottom Line (TBL) has added its voice to the concepts of CSR and sustainable development. The TBL model holds that businesses should report not only on their financial performance but also on their social and environmental performance, highlighting that CSR has moved beyond a purely “material-impact” approach towards a “future-oriented” approach (sustainability). In this paper, the concept of CSR is revisited by exploring its various theories. The discourse on sustainable development and sustainable development frameworks is also presented, showing how CSR can benefit businesses and their stakeholders as well as society as a whole, not just in the present but for the future. The paper does this by exploring the importance of both concepts (CSR and SD) and concludes with recommendations for more empirical research in the near future.

Keywords: corporate social responsibility, sustainable development, sustainability, triple bottom line model

Procedia PDF Downloads 232
9022 Evolution of Relations among Multiple Institutional Logics: A Case Study from a Higher Education Institution

Authors: Ye Jiang

Abstract:

To examine how the relationships among multiple institutional logics vary over time and the factors that may impact this process, we conducted an in-depth 15-year longitudinal case study of a higher education institution, examining its exploration of college student management. By employing constructive grounded theory, we developed a four-stage process model comprising separation, formalization, selective bridging, and embeddedness, which shows how two contradictory logics become complementary and finally merge into a new hybridized logic. We argue that selective bridging is an important step in changing inter-logic relations. We also found that ambidextrous leadership and situational sensemaking are two key factors that drive this process. Our contribution to the literature is threefold. First, we enhance the literature on the changing relationships among multiple institutional logics, and our findings advance the understanding of these relationships through a dynamic view. While most studies have tended to assume that the relationship among logics is static and persistently contentious, we contend that the relationships among multiple institutional logics can change over time: competing logics can become complementary, and a new hybridized logic can emerge from them. The four-stage process model offers insights into logic hybridization, which is underexplored in the literature. Second, our research reveals that selective bridging is important in making conflicting logics compatible, and thus constitutes a key step in creating new hybridized logics. Our findings suggest that the relations between multiple logics are manageable and can thus be shaped for organizational innovation. Finally, the factors influencing the variations in inter-logic relations enrich the understanding of the antecedents of these dynamics.

Keywords: institutional theory, institutional logics, ambidextrous leadership, situational sensemaking

Procedia PDF Downloads 126