Search results for: booking amount
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4012

3472 Effects of Earthquake Induced Debris to Pedestrian and Community Street Network Resilience

Authors: Al-Amin, Huanjun Jiang, Anayat Ali

Abstract:

Reinforced concrete (RC) frames, especially ordinary RC frames, are prone to structural failure and collapse during seismic events, generating a large amount of debris that obstructs adjacent areas, including streets. These blocked areas severely impede post-earthquake resilience. This study uses finite element (FEM) simulation to investigate the amount of debris generated by the seismic collapse of an ordinary reinforced concrete moment frame building and its effects on the adjacent pedestrian and road network. A three-story ordinary RC frame building, designed primarily for gravity loads and earthquake resistance, was selected for analysis. Sixteen different ground motions were applied and scaled up until total collapse of the building to evaluate the failure modes under various seismic events. Four collapse directions were identified through the analysis, namely aligned (positive and negative) and skewed (positive and negative), with aligned collapse being more prevalent than skewed cases. The amount and distribution of debris around the collapsed building were assessed to investigate the interaction between collapsed buildings and adjacent street networks. An interaction was established between a building that collapsed in an aligned direction and the adjacent pedestrian walkway and narrow street located in an unplanned old city. The FEM model was validated against an existing shaking table test. The presented results can be used to simulate the interdependency between the debris generated by the collapse of seismically vulnerable buildings and the resilience of street networks. These findings provide insights for better disaster planning and resilient infrastructure development in earthquake-prone regions.

Keywords: building collapse, earthquake-induced debris, ORC moment resisting frame, street network

Procedia PDF Downloads 62
3471 Effect of Impact Angle on Erosive Abrasive Wear of Ductile and Brittle Materials

Authors: Ergin Kosa, Ali Göksenli

Abstract:

Erosion and abrasion are wear mechanisms that reduce the lifetime of machine elements such as valves, pumps, and pipe systems. Both wear mechanisms act at the same time, causing a "synergy" effect that leads to rapid damage of the surface. Different parameters affect the erosive-abrasive wear rate. In this study, the effect of particle impact angle on the wear rate and wear mechanism of ductile and brittle materials was investigated. A new slurry pot was designed for the experimental investigation. Silica sand with a particle size ranging between 200 and 500 µm was used as the abrasive. All tests were carried out in a sand-water mixture of 20% concentration for four hours, with a particle impact velocity of 4.76 m/s. Steel St 37 with a Brinell hardness number (BHN) of 245 was used as the ductile material, and quenched St 37 with 510 BHN as the brittle material. After the wear tests, the morphology of the eroded surfaces was examined by optical microscopy and scanning electron microscopy for a better understanding of the wear mechanisms acting at different impact angles. The results indicated that the wear rate of the ductile material was higher than that of the brittle material. Maximum wear of the ductile material was observed at a particle impact angle of 30°. In contrast, the wear rate of the brittle material increased with impact angle and reached its maximum value at 45°. A high number of craters was detected on the ductile material surface, along with plastic deformation zones, which are typical failure modes of ductile materials. The craters formed by particles were deeper than those on the worn surface of the brittle material, where the number of craters decreased and microcracks around the craters, a typical failure mode of brittle materials, were detected. Deformation wear was the dominant wear mechanism on the brittle material.
It is concluded that the wear rate cannot be directly related to the impact angle of the hard particle, owing to the different responses of ductile and brittle materials.

Keywords: erosive wear, particle impact angle, silica sand, wear rate, ductile-brittle material

Procedia PDF Downloads 361
3470 Remote Sensing and Geographic Information Systems for Identifying Water Catchments Areas in the Northwest Coast of Egypt for Sustainable Agricultural Development

Authors: Mohamed Aboelghar, Ayman Abou Hadid, Usama Albehairy, Asmaa Khater

Abstract:

Sustainable agricultural development of the desert areas of Egypt under the pressure of irrigation water scarcity is a significant national challenge. Existing water harvesting techniques on the northwest coast of Egypt do not ensure the optimal use of rainfall for agricultural purposes. Basin-scale hydrology was studied to investigate how the available annual rainfall could be used to increase agricultural production. All data related to agricultural production were included in the form of geospatial layers. Thematic classification of Sentinel-2 imagery was carried out to produce land cover and crop maps following the FAO system of land cover classification. Contour lines and spot height points were used to create a digital elevation model (DEM). The DEM was then used to delineate basins, sub-basins, and water outlet points using the Soil and Water Assessment Tool (ArcSWAT). The main soil units of the study area were identified from Land Master Plan maps, and climatic data were collected from official sources. The amounts of precipitation, surface water runoff, and potential and actual evapotranspiration for the years 2004 to 2017 were produced as ArcSWAT outputs. The land cover map showed that the two tree crops (olive and fig) cover 195.8 km², while herbaceous crops (barley and wheat) cover 154 km². The maximum elevation was 250 meters above sea level, while the lowest point was 3 meters below sea level. The study area receives a considerable but variable amount of precipitation; however, the current water harvesting methods are inadequate for storing this water for agricultural purposes.
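SWAT estimates daily surface runoff from rainfall with the SCS curve-number method. A minimal sketch of that calculation follows; the curve number CN = 85 and the storm depth are hypothetical values for illustration, not parameters from the study:

```python
def scs_runoff(rainfall_mm, curve_number):
    """SCS curve-number runoff: Q = (P - 0.2*S)^2 / (P + 0.8*S), in mm.

    S is the potential maximum retention; runoff is zero until rainfall
    exceeds the initial abstraction 0.2*S."""
    s = 25400.0 / curve_number - 254.0   # retention parameter (mm)
    ia = 0.2 * s                         # initial abstraction (mm)
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm + 0.8 * s)

# A 50 mm storm on soil with a hypothetical CN of 85:
q = scs_runoff(50.0, 85)   # roughly 19.6 mm of runoff
```

Higher curve numbers (less permeable soil or land cover) convert more of the same storm into runoff, which is what the basin delineation above feeds into.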

Keywords: water catchments, remote sensing, GIS, sustainable agricultural development

Procedia PDF Downloads 94
3469 In Silico Modeling of Drugs Milk/Plasma Ratio in Human Breast Milk Using Structures Descriptors

Authors: Navid Kaboudi, Ali Shayanfar

Abstract:

Introduction: Feeding infants safe milk from the beginning of their lives is an important issue. Drugs taken by the mother can alter the composition of milk in a way that is not only unsuitable but also toxic for the infant, and consuming permeable drugs during that sensitive period could lead to serious side effects in the infant. Due to the ethical restrictions on drug testing in humans, especially in women during lactation, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the milk-to-plasma (M/P) ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine chemicals with known M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value, following Malone's classification: (1) drugs with M/P > 1, considered high risk, and (2) drugs with M/P ≤ 1, considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software to assess penetration during the breastfeeding period. Four specific models, based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens, were then established for the prediction; these descriptors can predict penetration with acceptable accuracy. For the remaining compounds of each model (N = 147, 158, 160, and 174 for models 1 to 4, respectively), binary logistic regression with SPSS 21 was performed to obtain a model predicting the penetration class of compounds. Only structural descriptors with p-value < 0.1 remained in the final model.
Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the lipid content increases after parturition. Lipid-soluble drugs diffuse along with fats from plasma to the mammary glands, so lipophilicity plays a vital role in predicting the penetration class of drugs during lactation. The logistic regression models showed that compounds with a number of hydrogen bond acceptors, PSA, and TSA above 5, 90, and 25, respectively, are less permeable to milk because they are less soluble in the milk fats. The pH of milk is acidic; therefore, basic compounds tend to be more concentrated in milk than in plasma, while acidic compounds may reach lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can speed up the drug development process, saving energy and costs. Milk/plasma ratio assessment of drugs normally requires multiple steps of animal testing, which raises its own ethical issues; QSAR modeling can help scientists reduce the amount of animal testing, and our models are suited to that purpose.
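A fitted binary logistic regression of this kind classifies a compound by passing a linear combination of its descriptors through a sigmoid. A minimal sketch follows; the coefficients are hypothetical placeholders for illustration, not the values fitted in SPSS:

```python
import math

def mp_high_risk_probability(hba, psa, b0=2.0, b_hba=-0.4, b_psa=-0.02):
    """Hypothetical logistic model: probability that M/P > 1 (high risk)
    from the number of hydrogen bond acceptors (hba) and polar surface
    area (psa). Coefficients are illustrative, not the fitted values."""
    z = b0 + b_hba * hba + b_psa * psa
    return 1.0 / (1.0 + math.exp(-z))

# With these signs, few acceptors and low PSA push the probability up
p_small = mp_high_risk_probability(hba=2, psa=40.0)    # > 0.5 -> high risk
p_large = mp_high_risk_probability(hba=8, psa=120.0)   # < 0.5 -> low risk
```

The decision boundary sits at probability 0.5, i.e. where the linear combination of descriptors crosses zero.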

Keywords: logistic regression, breastfeeding, descriptors, penetration

Procedia PDF Downloads 46
3468 Discovering Social Entrepreneurship: A Qualitative Study on Stimulants and Obstacles for Social Entrepreneurs in the Hague

Authors: Loes Nijskens

Abstract:

The city of The Hague is coping with several social issues: high unemployment rates, segregation, and environmental pollution. The number of social enterprises in The Hague that want to tackle these issues is increasing, but no clear picture exists of the stimulants and obstacles social entrepreneurs encounter. In this qualitative study, 20 starting and established social entrepreneurs, investors, and stimulators of social entrepreneurship were interviewed. The findings indicate that the majority of entrepreneurs in The Hague focus on creating jobs (the so-called social nurturers) and diminishing food waste. The study also found smaller groups of social connectors (who focus on stimulating social cohesion in the city) and social traders (who create a market for products from developing countries). For the social nurturers, working together with the local government to find people at a distance from the labour market is a challenge. The entrepreneurs miss a governance approach within the local government in which space is provided to develop suitable legislation and projects, in cooperation with several stakeholders, in order to diminish social problems. All entrepreneurs in the sample face(d) the challenge of defining a clear purpose for their business in the beginning. Starting social entrepreneurs tend to be idealistic without having defined a business model, and without a defined business model it is difficult to find proper funding. The more advanced enterprises cope with the challenge of measuring social impact: the larger they grow, the more they have to 'defend' to the local government and their customers that they are mainly social. Hence, even the more experienced social nurturers still find it difficult to work with the local government, and they tend to settle their business in other municipalities, where they find more effective public-private partnerships.
All this said, the ecosystem for social enterprises in The Hague is on the rise. To stimulate the number and growth of social enterprises, the cooperation between entrepreneurs and local government, the development of social business models, and the measurement of impact need more attention.

Keywords: obstacles, social enterprises, stimulants, the Hague

Procedia PDF Downloads 202
3467 Identification of Text Domains and Register Variation through the Analysis of Lexical Distribution in a Bangla Mass Media Text Corpus

Authors: Mahul Bhattacharyya, Niladri Sekhar Dash

Abstract:

The present research paper is an experimental attempt to investigate the nature of register variation in three major text domains, namely social, cultural, and political texts, collected from a corpus of Bangla printed mass media texts. The study uses a moderately sized corpus of Bangla mass media text containing nearly one million words collected from different media sources such as newspapers, magazines, advertisements, and periodicals. The analysis of the corpus data reveals that each text has certain lexical properties that not only control its identity but also mark its uniqueness across the domains. At first, the subject domains of the texts are classified under two parameters, namely 'Genre' and 'Text Type'. Next, some empirical investigations are made to understand how the domains vary from each other in terms of lexical properties, considering both function and content words. Here the method of comparative-cum-contrastive matching of lexical load across domains is applied through word frequency counts, to track how domain-specific words and terms may serve as decisive indicators in specifying textual contexts and subject domains. The study shows that the common lexical stock that percolates across all text domains is of little diagnostic value, as its lexicological identity has no bearing on the act of specifying subject domains. It therefore becomes necessary for language users to rely on certain domain-specific lexical items to recognize a text as belonging to a specific text domain. The eventual findings of this study confirm that texts belonging to different subject domains in the Bangla news text corpus clearly differ on the parameters of lexical load, lexical choice, lexical clustering, and lexical collocation.
In fact, based on these parameters, along with some statistical calculations, it is possible to classify mass media texts into different types to mark their relation to the domains to which they actually belong. The advantage of this analysis lies in the proper identification of the linguistic factors, which will give language users better insight into the methods they employ in text comprehension, as well as a systematic frame for designing text identification strategies for language learners. The availability of a huge amount of Bangla media text data is useful for reaching accurate conclusions with a certain amount of reliability and authenticity. This kind of corpus-based analysis is quite relevant for a resource-poor language like Bangla, as no attempt has previously been made to understand how the structure and texture of Bangla mass media texts vary due to the linguistic and extra-linguistic constraints operating on specific text domains. Since mass media language is assumed to be the most 'recent representation' of the actual use of the language, this study is expected to show how Bangla news texts reflect the thoughts of the society and how strongly they impact the thought process of the speech community.
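At its core, the comparative matching of lexical load across domains reduces to frequency counting and set comparison. A toy sketch with short English sentences standing in for the Bangla corpus data:

```python
from collections import Counter

# Toy stand-ins for two domain sub-corpora (illustrative, not corpus data)
political = "the minister said the parliament will vote on the new bill"
cultural = "the festival opened with a dance performance and a folk song"

def word_freq(text):
    """Frequency count of word tokens (whitespace tokenisation for brevity)."""
    return Counter(text.lower().split())

freq_pol, freq_cul = word_freq(political), word_freq(cultural)

# Common lexical stock: present in both domains, hence a weak domain cue
shared = set(freq_pol) & set(freq_cul)

# Candidate domain markers: attested in one domain, absent from the other
political_markers = set(freq_pol) - set(freq_cul)
```

On real corpora the same comparison would be run over normalised frequencies, with statistical tests to separate genuine markers from sampling noise.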

Keywords: Bangla, corpus, discourse, domains, lexical choice, mass media, register, variation

Procedia PDF Downloads 158
3466 Modelling of Exothermic Reactions during Carbon Fibre Manufacturing and Coupling to Surrounding Airflow

Authors: Musa Akdere, Gunnar Seide, Thomas Gries

Abstract:

Carbon fibres are fibrous materials with a carbon content of more than 90%. They combine excellent mechanical properties with a very low density, so carbon fibre reinforced plastics (CFRP) are very often used in lightweight design and construction. The precursor material is usually polyacrylonitrile (PAN) based and wet-spun. During the production of carbon fibre, the precursor has to be thermally stabilized to withstand the high temperatures of up to 1500 °C that occur during carbonization. Even though carbon fibre has been used in aerospace applications since the late 1970s, there is still no general method available to find the optimal production parameters, and a trial-and-error approach is most often the only option. To gain much better insight into the process, the chemical reactions during stabilization have to be analyzed in detail. Therefore, a model of the chemical reactions (cyclization, dehydration, and oxidation) based on the research of Dunham and Edie has been developed. With the presented model, it is possible to perform a complete simulation of the fibre passing through all zones of stabilization. The fibre bundle is modeled as several circular fibres with a layer of air in between. Two thermal mechanisms are considered the most important: the exothermic reactions inside the fibre and the convective heat transfer between the fibre and the air. The exothermic reactions inside the fibres are modeled as a heat source, and differential scanning calorimetry measurements have been performed to estimate the heat of the reactions. To shorten the required simulation time, the number of fibres is reduced by similitude theory. Experiments on a pilot-scale stabilization oven were conducted to validate the simulated fibre temperature during stabilization, and a new method was developed to measure the fibre bundle temperature.
The comparison of the results shows that the developed simulation model gives good approximations of the temperature profile of the fibre bundle during the stabilization process.
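The two mechanisms named above, an internal exothermic heat source and convective exchange with the surrounding air, can be combined into a lumped heat balance. A minimal sketch with explicit Euler time integration; all parameter values are illustrative, not taken from the study:

```python
def fibre_temperature(q_exo, h, area, mass, cp, t_air, dt, steps):
    """Lumped heat balance for a fibre bundle:
        m * cp * dT/dt = q_exo - h * A * (T - T_air)
    q_exo: exothermic reaction heat (W), h: convective coefficient
    (W/m^2/K), area: exchange surface (m^2). Explicit Euler in time."""
    temp = t_air
    for _ in range(steps):
        d_temp = (q_exo - h * area * (temp - t_air)) / (mass * cp) * dt
        temp += d_temp
    return temp

# Illustrative numbers: steady state sits q_exo/(h*A) = 100 K above the air
t_final = fibre_temperature(q_exo=0.5, h=50.0, area=1e-4,
                            mass=1e-3, cp=1200.0, t_air=20.0,
                            dt=0.5, steps=4000)
```

The full model resolves this balance per fibre and per oven zone; the sketch only shows the structure of the coupling between reaction heat and airflow.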

Keywords: carbon fibre, coupled simulation, exothermic reactions, fibre-air-interface

Procedia PDF Downloads 249
3465 The Effects of Ellagic Acid on Rat Heart Induced Tobacco Smoke

Authors: Nalan Kaya, D. Ozlem Dabak, Gonca Ozan, Elif Erdem, Enver Ozan

Abstract:

Smoking is one of the common causes of cardiovascular disease (CVD). Tobacco smoke decreases the amount of oxygen that the blood can carry and increases the tendency for blood clots. Ellagic acid is a powerful antioxidant, found especially in red fruits, that has been shown to block the atherosclerotic process by suppressing oxidative stress and inflammation. The aim of this study was to examine the protective effects of ellagic acid against oxidative damage in the heart tissues of rats exposed to tobacco smoke. Twenty-four adult male (8 weeks old) Sprague-Dawley rats were divided randomly into 4 equal groups: group I (control), group II (tobacco smoke), group III (tobacco smoke + corn oil), and group IV (tobacco smoke + ellagic acid). The rats in groups II, III, and IV were exposed to tobacco smoke for 1 hour twice a day for 12 weeks. In addition to tobacco smoke exposure, 12 mg/kg ellagic acid (dissolved in corn oil) was administered to the rats in group IV by oral gavage; an equal amount of corn oil was administered by oral gavage to the rats in group III. At the end of the experimental period, the rats were decapitated, heart tissues and blood samples were taken, and histological and biochemical analyses were performed. Vascular congestion, hyperemic areas, inflammatory cell infiltration, and increased connective tissue in the perivascular area were observed in the tobacco smoke and tobacco smoke + corn oil groups. The increased perivascular connective tissue, hemorrhage, and inflammatory cell infiltration were reduced in the tobacco smoke + EA group. In group II the GSH level was not significantly changed, while CAT, SOD, and GPx activities were significantly higher than in group I. Compared to group II, in group IV the GSH level and the SOD, CAT, and GPx activities were increased, and the MDA level was decreased significantly. Group II and group III levels were similar. The results indicate that ellagic acid could protect the heart tissue from the harmful effects of tobacco smoke.

Keywords: ellagic acid, heart, rat, tobacco smoke

Procedia PDF Downloads 200
3464 Imaginal and in Vivo Exposure Blended with EMDR: Becoming Unstuck, an Integrated Inpatient Treatment for Post-Traumatic Stress Disorder

Authors: Merrylord Harb-Azar

Abstract:

Traditionally, PTSD treatment has involved trauma-focused cognitive behaviour therapy (TF-CBT) to consolidate traumatic memories. A piloted integrated treatment of TF-CBT and the eight-phase eye movement desensitisation and reprocessing therapy (EMDR) aims to accelerate the rate at which memory is consolidated and to enhance cognitive functioning in patients with PTSD. Patients spend a considerable amount of time in treatment managing traumas experienced first-hand or through aversive details, ranging from war, assaults, accidents, abuse, hostage situations, and riots to natural disasters. The time spent in treatment or as an inpatient affects overall quality of life, relationships, cognitive functioning, and overall sense of identity. EMDR is offered twice a week, in conjunction with standard prolonged exposure, to inpatients in a private hospital. Prolonged exposure for up to 5 hours per day elicits the affect response required for the afternoon EMDR sessions to unlock unprocessed memories and facilitate consolidation in the amygdala and hippocampus. Results indicate faster consolidation of memories and a reduction in symptoms over a shorter period, as well as reduced admission time, which is enhancing quality of life, relationships, and cognition. To date, results on the Impact of Event Scale (IES), the Trauma Symptom Inventory (TSI), and the Post-traumatic Stress Disorder Checklist (PCL) demonstrate a significant reduction in symptoms with large effect sizes. An integrated treatment approach for PTSD achieves a faster resolution of memories, improves cognition, and reduces the amount of time spent in therapy.

Keywords: EMDR enhances cognitive functioning, faster consolidation of trauma memory, integrated treatment of TF CBT and EMDR, reduction in inpatient admission time

Procedia PDF Downloads 125
3463 Paper Concrete: A Step towards Sustainability

Authors: Hemanth K. Balaga, Prakash Nanthagopalan

Abstract:

Every year a huge amount of paper is discarded, of which only a minute fraction is recycled; the rest is dumped in landfills. Paper fibres can be recycled only a limited number of times before they become too short or weak to make high-quality recycled paper, which eventually adds to the already large amount of waste paper that is generated and not recycled. It would be advantageous to utilize this prodigious amount of waste as a low-cost, sustainable construction material and turn it into a value-added product. The generic term for the material under investigation is paper concrete: a fibrous mix made of Portland cement, water, and pulped paper and/or other aggregates. The advantages of this material include light weight, good heat and sound insulation, and resistance to flame; the disadvantages include low strength compared to conventional concrete and its hydrophilic nature. The properties vary with the proportions of cement and paper in the mix. In the present study, Portland pozzolana cement and newsprint paper were used for the preparation of paper concrete cubes. Initial investigations determined the minimum soaking period required for softening of the paper fibres, and different methodologies were then explored for proper blending of the pulp with the cement paste. The properties of paper concrete vary with the cement-to-paper-to-water ratio. The study mainly addresses the strength and weight loss of the concrete cubes with age, and the time required for dry paper fibres to become soft enough in water to bond with the cement. The variation of compressive strength with cement content, water content, and time was studied.
The water loss of the cubes over time and the minimum time required for softening of the paper fibres were investigated. The results indicate that the material loses 25-50 percent of its initial weight by the end of 28 days, and a maximum 28-day compressive strength (cubes) of 5.4 MPa was obtained.

Keywords: soaking time, difference water, minimum water content, maximum water content

Procedia PDF Downloads 235
3462 Shape Management Method of Large Structure Based on Octree Space Partitioning

Authors: Gichun Cha, Changgil Lee, Seunghee Park

Abstract:

The objective of this study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is scarce because the technology has only recently been attempted. Terrestrial laser scanning (TLS) is used for measurements of large structures; it provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation, architecture, construction, and the maintenance of civil infrastructure. However, TLS produces a huge amount of point cloud data, and registration, extraction, and visualization require the processing of a massive amount of scan data. The octree can be applied to the shape management of large structures because the scan data are reduced in size while the data attributes are maintained. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University, and the scanned structure is a steel-frame bridge. The TLS used was a Leica ScanStation C10/C5. The scan data were condensed by 92%, and the octree model was constructed at a resolution of 2 millimeters. This study presents octree space partitioning for handling point clouds, as a basis for the shape management of large structures such as double-deck tunnels, buildings, and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. This work is financially supported by the 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291).
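The recursive eight-way subdivision described above can be sketched directly: each voxel splits at its midpoint into eight octants, and keeping one representative point per occupied leaf gives the kind of condensation reported. A minimal Python sketch; the resolution, depth, and point cloud are illustrative, not the bridge scan data:

```python
def octree_sample(points, lo, size, depth):
    """Recursively partition a cubic voxel (corner `lo`, edge `size`)
    into eight sub-voxels, keeping one representative point per
    occupied leaf. Points are (x, y, z) tuples inside the voxel."""
    if not points:
        return []
    if depth == 0 or len(points) == 1:
        return [points[0]]  # one representative per occupied leaf
    half = size / 2.0
    children = {}
    for p in points:
        # Octant key: one bit per axis, 0 = lower half, 1 = upper half
        key = tuple(int(p[i] - lo[i] >= half) for i in range(3))
        children.setdefault(key, []).append(p)
    sampled = []
    for key, pts in children.items():
        child_lo = tuple(lo[i] + key[i] * half for i in range(3))
        sampled.extend(octree_sample(pts, child_lo, half, depth - 1))
    return sampled

# 400 points on a plane inside the unit cube, leaves of edge 1/8:
cloud = [(i / 20.0, j / 20.0, 0.5) for i in range(20) for j in range(20)]
kept = octree_sample(cloud, lo=(0.0, 0.0, 0.0), size=1.0, depth=3)
```

Because the tree only subdivides occupied voxels, the sampled set shrinks sharply in dense regions while empty space costs nothing, which is what allows a 92%-style reduction without losing the attribute structure.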

Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning

Procedia PDF Downloads 278
3461 From Binary Solutions to Real Bio-Oils: A Multi-Step Extraction Story of Phenolic Compounds with Ionic Liquid

Authors: L. Cesari, L. Canabady-Rochelle, F. Mutelet

Abstract:

The thermal conversion of lignin produces bio-oils that contain many high-added-value compounds such as phenolic compounds. To efficiently extract these compounds, the possible use of the ionic liquid choline bis(trifluoromethylsulfonyl)imide [Choline][NTf2] was explored through a multistep approach. First, binary (phenolic compound + solvent) and ternary (phenolic compound + solvent + ionic liquid) solutions were investigated. Eight binary systems of phenolic compound and water were studied at atmospheric pressure and quantified using the turbidity method and UV spectroscopy. The ternary systems (phenolic compound + water + [Choline][NTf2]) were investigated at room temperature and atmospheric pressure: after stirring, the solutions were left to settle, a sample of each phase was collected, and the phases were analyzed by gas chromatography with an internal standard. These results were used to quantify the interaction parameters of thermodynamic models. Extractions were then performed on synthetic solutions to determine the influence of several operating conditions (temperature, kinetics, amount of [Choline][NTf2]). With this knowledge, it was possible to design and simulate an extraction process composed of one extraction column and one flash. Finally, the extraction efficiency of [Choline][NTf2] was quantified with real bio-oils from lignin pyrolysis; qualitative and quantitative analyses were performed using gas chromatography coupled to mass spectrometry and a flame ionization detector. The experimental measurements show that the extraction of phenolic compounds is efficient at room temperature, is quick, and does not require a large amount of [Choline][NTf2]. Moreover, the simulations of the extraction process demonstrate that the [Choline][NTf2] process requires less energy than an organic-solvent one.
Finally, the efficiency of [Choline][NTf2] was confirmed in real conditions through the experiments on lignin pyrolysis bio-oils.

Keywords: bio-oils, extraction, lignin, phenolic compounds

Procedia PDF Downloads 88
3460 Failure Analysis of Recoiler Mandrel Shaft Used for Coiling of Rolled Steel Sheet

Authors: Sachin Pawar, Suman Patra, Goutam Mukhopadhyay

Abstract:

The primary function of a shaft is to transfer power. A shaft can be cast or forged and then machined to its final shape. Manufacturing a shaft of ~5 m length and 0.6 m diameter is very critical, and it is even more difficult to maintain its straightness during the heat treatment and machining operations, which involve thermal and mechanical loads, respectively. During the machining operation of such a forged mandrel shaft, a deflection of 3-4 mm was observed. To remove this deflection, the shaft was pressed at both ends, which led to the development of cracks in it. To investigate the root cause of the deflection and cracking, a sample was cut from the failed shaft. Possible causes were identified with the help of a cause-and-effect diagram. Chemical composition analysis, microstructural analysis, and hardness measurements were done to confirm whether the shaft met the required specifications. Chemical composition analysis confirmed that the material grade was 42CrMo4. Microstructural analysis revealed the presence of untempered martensite, indicating improper heat treatment; because of this, the ductility and impact toughness values were considerably lower than the specification for the mentioned grade. Residual stresses in one more bent shaft, manufactured by a similar route, were measured by the portable X-ray diffraction (XRD) technique; for better understanding, measurements were taken at twelve locations along the length of the shaft. A high level of undesirable tensile residual stress, close to the ultimate tensile strength (UTS) of the material, was observed. The untempered martensitic structure, lower ductility, lower impact strength, and presence of high residual stresses all confirmed improper tempering heat treatment of the shaft, since tempering relieves residual stresses. Based on the findings of this study, a stress-relieving heat treatment was applied, which successfully removed the residual stresses and the deflection in the shaft.

Keywords: residual stress, mandrel shaft, untempered martensite, portable XRD

Procedia PDF Downloads 97
3459 Synthesis of Double Dye-Doped Silica Nanoparticles and Its Application in Paper-Based Chromatography

Authors: Ka Ho Yau, Jan Frederick Engels, Kwok Kei Lai, Reinhard Renneberg

Abstract:

Lateral flow testing is a prevalent technology in sectors such as food, pharmacology and the biomedical sciences. Colloidal gold (CG) is widely used as the signalling molecule because of its ease of synthesis, straightforward biomolecular conjugation and its red colour, which arises from the intrinsic surface plasmon resonance effect. However, the production of colloidal gold is costly and requires vigorous conditions, and its stability is easily affected by environmental factors such as pH and high salt content. Silica nanoparticles, by contrast, are well known for their ease of production and stability over a wide range of solvents. Using reverse micro-emulsion (w/o), silica nanoparticles of different sizes can be produced precisely by controlling the amount of water, and by incorporating different water-soluble dyes, silica nanoparticles in a rainbow of colours can be obtained. Conjugation with biomolecules such as antibodies can be achieved after surface modification of the silica nanoparticles with an organosilane. The optimum amount of antibody for labelling was determined by Bradford assay. In this work, we demonstrated the ability of dye-doped silica nanoparticles to act as the signalling molecule in a lateral flow test, giving a semi-quantitative measurement of the analyte. Image analysis yielded a limit of detection (LOD) of 10 ng of analyte. The working range and the linear range of the test were 0 to 2.15 μg/mL and 0 to 1.07 μg/mL (R² = 0.988), respectively. The performance of the tests was comparable to those using colloidal gold, with the advantages of lower cost, enhanced stability and a wide spectrum of colours. The positive lines can be read by the naked eye or imaged with a mobile phone camera for better quantification. Further research has been carried out on the simultaneous multicolour detection of different biomarkers; the preliminary results were promising, with little cross-reactivity observed for an optimized system. This approach provides a platform for multicolour detection of a set of biomarkers, enhancing the accuracy of disease diagnostics.
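The semi-quantitative readout described above amounts to a linear calibration of test-line intensity against analyte concentration within the 0-1.07 μg/mL linear range. A minimal sketch of such an analysis follows; the intensity values are illustrative assumptions, not the paper's measurements:

```python
import numpy as np

# Hypothetical calibration data: analyte concentration (ug/mL) vs. test-line
# intensity from a phone-camera image (arbitrary units); values are invented.
conc = np.array([0.0, 0.27, 0.54, 0.80, 1.07])   # within the reported linear range
intensity = np.array([2.0, 30.5, 58.0, 86.5, 113.0])

# Least-squares line fit, as in a semi-quantitative lateral-flow readout
slope, intercept = np.polyfit(conc, intensity, 1)

# Coefficient of determination R^2 of the calibration line
pred = slope * conc + intercept
ss_res = np.sum((intensity - pred) ** 2)
ss_tot = np.sum((intensity - intensity.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Invert the fit to estimate an unknown sample's concentration from its line intensity
unknown_intensity = 70.0
estimated_conc = (unknown_intensity - intercept) / slope
```

In practice the unknown's intensity is simply interpolated back through the calibration line, which is what makes the phone-camera readout quantitative rather than yes/no.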

Keywords: colorimetric detection, immunosensor, paper-based biosensor, silica

Procedia PDF Downloads 365
3458 The Influence of Bentonite on the Rheology of Geothermal Grouts

Authors: A. N. Ghafar, O. A. Chaudhari, W. Oettel, P. Fontana

Abstract:

This study is part of the EU project GEOCOND (Advanced materials and processes to improve performance and cost-efficiency of shallow geothermal systems and underground thermal storage). In heat exchange boreholes, to improve heat transfer between the pipes and the surrounding ground, the space between the pipes and the borehole wall is normally filled with geothermal grout. Traditionally, bentonite has been a crucial component of most commercially available geothermal grouts, assuring the required stability and impermeability. Investigations conducted in the early stage of this project, during benchmarking tests on some commercial grouts, showed considerable sensitivity of the rheological properties of the tested grouts to the mixing parameters, i.e., mixing time and velocity. Further study showed that bentonite, one of the important constituents of most grout mixes, was probably responsible for this behavior: a sufficient amount of shear must be applied during mixing to activate the bentonite, and the higher the applied shear, the greater the activation and the resulting change in grout rheology. This explains why, occasionally in field applications, the flow properties of commercially available geothermal grouts prepared under different mixing conditions (mixer type, mixing time, mixing velocity) are completely different from what is expected. A series of tests were conducted on grout mixes, with and without bentonite, using different mixing protocols. The aim was to eliminate or reduce the sensitivity of the rheological properties of geothermal grouts to the mixing parameters by replacing bentonite with polymeric (non-clay) stabilizers. The results showed that replacing bentonite with a suitable polymeric stabilizer largely eliminated the sensitivity of the grout mix to mixing time and velocity. This offers developers and producers of geothermal grouts a way to provide enhanced materials with less uncertainty in the results obtained in field applications.

Keywords: flow properties, geothermal grout, mixing time, mixing velocity, rheological properties

Procedia PDF Downloads 109
3457 Effect of Cement Amount on California Bearing Ratio Values of Different Soil

Authors: Ayse Pekrioglu Balkis, Sawash Mecid

Abstract:

Due to the continued growth and rapid development of road construction worldwide, and because road sub-layers consist of soil layers, identifying the type of soil and its behavior under different conditions helps us select soil according to specifications and engineering characteristics and, where necessary, stabilize the soil and treat undesirable properties by adding materials such as bitumen, lime or cement. If the soil beneath the road does not meet the standards, a large part of it must be removed, transported and sometimes deposited, and purchased sand and gravel must be transported to the site, placed at full depth and compacted, adding construction time. Stabilization with cement or other treatments offers the opportunity to use the existing soil as a base material instead of removing it and purchasing and transporting better fill materials. Classifying soil according to the AASHTO and USCS systems helps engineers anticipate soil behavior and select the best treatment method. In this study, soil classification and the relation between classification and stabilization method are discussed; cement stabilization at different percentages was selected for soil treatment based on NCHRP guidance. Among the different parameters used to define soil strength, the California Bearing Ratio (CBR) is used here. Cement was added to the soil at 0%, 3%, 7% and 10% to evaluate the effect of cement content on the CBR of the treated soil. Carrying out the stabilization process at different cement contents helps engineers select an economical cement amount according to project specifications and characteristics. Stabilization at optimum moisture content (OMC) and the effect of mixing rate on soil strength were examined in laboratory and field construction operations to assess the improvement in strength and plasticity. Cement stabilization is quicker than the usual method of removing and replacing field soils. Cement addition increases the CBR values of the different soil types by 22-69%.

Keywords: California Bearing Ratio, cement stabilization, clayey soil, mechanical properties

Procedia PDF Downloads 374
3456 3D Numerical Modelling of a Pulsed Pumping Process of a Large Dense Non-Aqueous Phase Liquid Pool: In situ Pilot-Scale Case Study of Hexachlorobutadiene in a Keyed Enclosure

Authors: Q. Giraud, J. Gonçalvès, B. Paris

Abstract:

Remediation of dense non-aqueous phase liquids (DNAPLs) is a challenging issue because of their persistent behaviour in the environment. This pilot-scale study investigates, by means of in situ experiments and numerical modelling, the feasibility of pulsed pumping of a large amount of DNAPL from an alluvial aquifer. The main compound of the DNAPL is hexachlorobutadiene, an emerging organic pollutant. A low-permeability keyed enclosure was built at the location of the DNAPL source zone in order to isolate a finite undisturbed volume of soil, and a 3-month pulsed pumping process was applied inside the enclosure to extract the DNAPL exclusively. The water/DNAPL interface elevation at both the pumping and observation wells and the cumulative pumped volume of DNAPL were recorded. A total volume of about 20 m³ of pure DNAPL was recovered, since no water was extracted during the process. The three-dimensional multiphase flow simulator TMVOC was used, and a conceptual model was elaborated and generated with the pre/post-processing tool mView. The numerical model consisted of 10 layers of variable thickness and 5060 grid cells. The numerical simulations reproduce the pulsed pumping process and show an excellent match between simulated and field data for the cumulative pumped DNAPL volume, and a reasonable agreement between modelled and observed water/DNAPL interface elevations at the two wells. This study offers a new perspective for remediation, since DNAPL pumping systems may be optimised where a large amount of DNAPL is encountered.

Keywords: dense non-aqueous phase liquid (DNAPL), hexachlorobutadiene, in situ pulsed pumping, multiphase flow, numerical modelling, porous media

Procedia PDF Downloads 158
3455 Single-Molecule Optical Study of Cholesterol-Mediated Dimerization Process of EGFRs in Different Cell Lines

Authors: Chien Y. Lin, Jung Y. Huang, Leu-Wei Lo

Abstract:

A growing body of data reveals that membrane cholesterol molecules can alter the signaling pathways of living cells. However, understanding of how membrane cholesterol modulates receptor proteins is still lacking. Single-molecule tracking can effectively probe the microscopic environments and thermal fluctuations of receptor proteins in a living cell. In this study, we applied single-molecule optical tracking to the ligand-induced dimerization process of EGFRs in the plasma membranes of two cancer cell lines (HeLa and A431) and one normal cell line (MCF12A). We tracked individual receptors and receptor pairs diffusing in a correlated manner in the plasma membranes of live cells, and developed an energetic model, integrating the generalized Langevin equation with the Cahn-Hilliard equation, to help extract information from the single-molecule trajectories. From this study, we discovered that ligand-bound EGFRs move from non-raft areas into lipid raft domains, a behavior common to both the cancer and normal cells. By manipulating the total amount of membrane cholesterol with methyl-β-cyclodextrin and the local concentration of membrane cholesterol with nystatin, we further found that the amount of cholesterol affects the stability of EGFR dimers: dimers in the plasma membrane of normal cells are more sensitive to local changes in cholesterol concentration than dimers in the cancer cells. Our method successfully captures dynamic receptor interactions at the single-molecule level and provides insight into the functional architecture of both the diffusing EGFR molecules and their local cellular environment.
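The correlated diffusion described above is typically quantified from single-molecule trajectories via the mean-squared displacement (MSD). The sketch below is the standard MSD analysis for free 2D diffusion, not the authors' Langevin/Cahn-Hilliard model; the trajectory, diffusion coefficient and frame interval are simulated assumptions:

```python
import numpy as np

def msd(traj, max_lag):
    """Time-averaged mean-squared displacement of one 2D trajectory.

    traj: (N, 2) array of receptor positions over time.
    Returns msd[k-1] for lags k = 1..max_lag.
    """
    out = np.empty(max_lag)
    for k in range(1, max_lag + 1):
        disp = traj[k:] - traj[:-k]          # displacements separated by k frames
        out[k - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return out

# Synthetic free-diffusion trajectory (D and dt are illustrative assumptions)
rng = np.random.default_rng(0)
D, dt, n = 0.1, 0.05, 5000                   # um^2/s, s, number of frames
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n, 2))
traj = np.cumsum(steps, axis=0)

# For free 2D diffusion, MSD(k*dt) ~ 4*D*(k*dt); recover D from a linear fit
lags = np.arange(1, 21)
m = msd(traj, 20)
D_est = np.polyfit(lags * dt, m, 1)[0] / 4.0
```

Deviations of the measured MSD from this linear form (confinement, anomalous exponents) are what reveal raft trapping and correlated dimer motion in real data.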

Keywords: membrane proteins, single-molecule tracking, Cahn-Hilliard equation, EGFR dimers

Procedia PDF Downloads 393
3454 Design and Fabrication of Piezoelectric Tactile Sensor by Deposition of PVDF-TrFE with Spin-Coating Method for Minimally Invasive Surgery

Authors: Saman Namvarrechi, Armin A. Dormeny, Javad Dargahi, Mojtaba Kahrizi

Abstract:

Over the last two decades, minimally invasive surgery (MIS) has grown significantly owing to its advantages over traditional open surgery, such as less physical pain, faster recovery and better healing around incision regions. One of the important challenges in MIS, however, is obtaining effective sensory feedback from within the patient's body during operations. Surgeons need efficient tactile sensing, for example to determine the hardness of the contacted tissue when investigating the patient's condition. MIS tactile sensors are therefore expected to provide force/pressure sensing, force position, lump detection and softness sensing. Among pressure sensor technologies, the piezoelectric operating principle is best suited to MIS instruments such as catheters. Using PVDF with its copolymer TrFE as the piezoelectric material is a common approach to the design and fabrication of tactile sensors because of its ease of implementation and biocompatibility. In this research, PVDF-TrFE polymer was deposited by spin coating and treated with various post-deposition processes to investigate its piezoelectricity and the amount of the electroactive β phase. These processes included different post-deposition thermal anneals, different spin-coating speeds, different numbers of deposited layers and the presence of an additional hydrated salt. From FTIR spectroscopy and SEM images, the β-phase content and porosity of each sample were determined, and an optimum experimental procedure was established by considering every aspect of the fabrication process. This study clearly shows an effective way of depositing and fabricating a tactile PVDF-TrFE-based sensor, and a methodology for achieving a higher β phase and piezoelectric constant in order to provide a better sense of touch at the end effector of biomedical devices.
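One common way of quantifying the relative β-phase content of a PVDF film from FTIR absorbances, consistent with the characterization described, is the Gregorio relation; the absorbance values below are illustrative assumptions, not the study's data:

```python
def beta_phase_fraction(a_alpha, a_beta, k_ratio=1.26):
    """Relative beta-phase content of a PVDF film from FTIR absorbances.

    Standard Gregorio relation: F(beta) = A_beta / (k * A_alpha + A_beta),
    with k = K_beta / K_alpha ~ 1.26, using absorbances typically read at
    ~766 cm^-1 (alpha band) and ~840 cm^-1 (beta band).
    """
    return a_beta / (k_ratio * a_alpha + a_beta)

# Illustrative absorbance readings (assumed values)
f_beta = beta_phase_fraction(a_alpha=0.20, a_beta=0.65)   # ~0.72
```

Comparing F(β) across annealing temperatures, spin speeds and layer counts is how the "optimum experimental procedure" mentioned above would be ranked.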

Keywords: β phase, minimally invasive surgery, piezoelectricity, PVDF-TrFE, tactile sensor

Procedia PDF Downloads 103
3453 Green Crypto Mining: A Quantitative Analysis of the Profitability of Bitcoin Mining Using Excess Wind Energy

Authors: John Dorrell, Matthew Ambrosia, Abilash

Abstract:

This paper employs econometric analysis to quantify the potential profit wind farms can earn by allocating excess wind energy to power bitcoin mining machines. Cryptocurrency mining consumes a substantial amount of electricity worldwide, while a significant amount of wind energy is lost because of the intermittent nature of the resource: supply does not always match consumer demand. By pairing the weaknesses of these two technologies, efficiency can be improved and a more sustainable path to mining cryptocurrencies created. The paper uses historical wind energy data from the ERCOT network in Texas and cryptocurrency data from 2000-2021 to create 4-year return-on-investment projections. Our model incorporates the price of bitcoin, the price of the miner, the hash rate of the miner relative to the network hash rate, the block reward, the bitcoin transaction fees awarded to miners, the mining pool fees, the cost of electricity and the percentage of time the miner will be running, and demonstrates that wind farms generate enough excess energy to mine bitcoin profitably. Excess wind energy can thus act as a financial battery, converting otherwise wasted electricity into economic value. Our findings show that wind energy producers can earn a profit while taking away little, if any, electricity from the grid. According to our results, bitcoin mining could return as much as 1347% and 805% on investment for starting dates of November 1, 2021, and November 1, 2022, respectively, using wind farm curtailment. This paper should be helpful to policymakers and investors in determining efficient and sustainable ways to power our economic future; it proposes a practical solution to the energy consumption of crypto mining and supports a more sustainable energy future for Bitcoin.
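The model inputs enumerated above can be combined in a simple expected-daily-revenue calculation. The sketch below is a hedged illustration with assumed parameter values, not the paper's econometric model or its data:

```python
def mining_roi(btc_price, miner_cost, miner_hashrate, network_hashrate,
               block_reward, fees_per_block, pool_fee, power_kw,
               elec_price_kwh, uptime, days):
    """ROI (%) sketch for one miner, optionally powered by near-free curtailed wind.

    Inputs mirror the factors the model incorporates: bitcoin price, miner price,
    miner hash rate relative to the network, block reward, transaction fees,
    pool fee, electricity cost and uptime fraction. All values are assumptions.
    """
    blocks_per_day = 144                        # ~one block every 10 minutes
    share = miner_hashrate / network_hashrate   # expected share of network rewards
    daily_btc = blocks_per_day * (block_reward + fees_per_block) * share * uptime
    daily_rev = daily_btc * btc_price * (1 - pool_fee)
    daily_cost = power_kw * 24 * uptime * elec_price_kwh
    profit = (daily_rev - daily_cost) * days - miner_cost
    return profit / miner_cost * 100

# Illustrative 4-year run on curtailed wind (electricity effectively free)
roi = mining_roi(btc_price=40_000, miner_cost=3_000,
                 miner_hashrate=100e12, network_hashrate=200e18,
                 block_reward=6.25, fees_per_block=0.3, pool_fee=0.02,
                 power_kw=3.25, elec_price_kwh=0.0,
                 uptime=0.6, days=4 * 365)
```

Setting `elec_price_kwh` to a grid tariff instead of zero shows how quickly ROI collapses without curtailed power, which is the core of the "financial battery" argument.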

Keywords: bitcoin, mining, economics, energy

Procedia PDF Downloads 6
3452 Phase Composition Analysis of Ternary Alloy Materials for Gas Turbine Applications

Authors: Mayandi Ramanathan

Abstract:

Gas turbine blades experience the most aggressive thermal stress conditions within the engine, with turbine entry temperatures in the range of 1500 to 1600°C. The blades rotate at very high rates and remove a significant amount of thermal power from the gas stream. At high temperatures, the major component failure mechanism is creep: over its service life under high thermal loads, a blade will deform, lengthen and eventually rupture. High strength and stiffness in the longitudinal direction up to elevated service temperatures are therefore the most needed properties of turbine blades and other gas turbine components. The proposed advanced Ti alloy requires a process that provides a strategic orientation of metallic ordering, uniformity in composition and high metallic strength. The chemical composition of the proposed Ti alloy (25% Ta/(Al+Ta) ratio), unlike Ti-47Al-2Cr-2Nb, has less excess Al that could limit the service life of turbine blades. The properties and performance of Ti-47Al-2Cr-2Nb and Ti-6Al-4V will be compared with those of the proposed alloy to generalize the performance metrics of various gas turbine components. This paper summarizes the effects of additive manufacturing and heat treatment process conditions on the phase composition, grain structure, lattice structure, tensile strength, creep strain rate, thermal expansion coefficient and fracture toughness of the material at different temperatures. Based on these results, the additive manufacturing and heat treatment conditions will be optimized to fabricate turbine blades from a Ti-43Al matrix alloyed with an optimized amount of refractory Ta. Improvements in the service temperature of the turbine blades, and the dependence of corrosion resistance on the coercivity of the alloy, will be reported, together with a correlation between phase composition and creep strain rate.

Keywords: high temperature materials, aerospace, specific strength, creep strain, phase composition

Procedia PDF Downloads 88
3451 Destination Management Organization in the Digital Era: A Data Framework to Leverage Collective Intelligence

Authors: Alfredo Fortunato, Carmelofrancesco Origlia, Sara Laurita, Rossella Nicoletti

Abstract:

In the post-pandemic recovery phase of tourism, the role of a Destination Management Organization (DMO), a coordinated management system for all the elements that make up a destination (attractions, access, marketing, human resources, brand, pricing, etc.), is becoming relevant for local territories as well. The objective of a DMO is to maximize the visitor's perception of value and quality while ensuring the competitiveness and sustainability of the destination, as well as the long-term preservation of its natural and cultural assets, and to catalyze benefits for the local economy and residents. In carrying out its multiple functions, the DMO can leverage a collective intelligence that comes from the ability to pool the information, explicit and tacit knowledge, and relationships of the various stakeholders: policymakers, public managers and officials, entrepreneurs in the tourism supply chain, researchers, data journalists, schools, associations and committees, citizens, etc. The DMO potentially has large volumes of data at its disposal, much of it at low cost, which needs to be properly processed to produce value. Based on these assumptions, the paper presents a conceptual framework for building an information system to support the DMO in the intelligent management of a tourist destination, tested in an area of southern Italy.
The approach adopted is data-informed and consists of four phases: (1) formulation of the knowledge problem (analysis of policy documents and industry reports; focus groups and co-design with stakeholders; definition of information needs and key questions); (2) research and metadatation of relevant sources (reconnaissance of official sources, administrative archives and internal DMO sources); (3) gap analysis and identification of unconventional information sources (evaluation of traditional sources with respect to the level of consistency with information needs, the freshness of information and granularity of data; enrichment of the information base by identifying and studying web sources such as Wikipedia, Google Trends, Booking.com, Tripadvisor, websites of accommodation facilities and online newspapers); (4) definition of the set of indicators and construction of the information base (specific definition of indicators and procedures for data acquisition, transformation, and analysis). The framework derived consists of 6 thematic areas (accommodation supply, cultural heritage, flows, value, sustainability, and enabling factors), each of which is divided into three domains that gather a specific information need to be represented by a scheme of questions to be answered through the analysis of available indicators. The framework is characterized by a high degree of flexibility in the European context, given that it can be customized for each destination by adapting the part related to internal sources. 
Application to the case study led to the creation of a decision support system that allows:
- integration of data from heterogeneous sources, including through automated web crawling procedures for the ingestion of social and web information;
- reading and interpretation of data and metadata through guided navigation paths presented as digital storytelling;
- complex analysis capabilities through data mining algorithms, for example for the prediction of tourist flows.

Keywords: collective intelligence, data framework, destination management, smart tourism

Procedia PDF Downloads 101
3450 Application of Stochastic Models on the Portuguese Population and Distortion to Workers Compensation Pensioners Experience

Authors: Nkwenti Mbelli Njah

Abstract:

This research was motivated by a project requested by AXA on pensions payable under the workers' compensation (WC) line of business. There are two types of pensions: the compulsorily recoverable and the not compulsorily recoverable. A pension is compulsorily recoverable for a victim when there is less than 30% disability and the pension amount per year is less than six times the minimum national salary. The law specifies that the mathematical provisions for compulsorily recoverable pensions must be calculated on the following bases: mortality table TD88/90 and an interest rate of 5.25% (possibly with a management charge). Managing pensions that are not compulsorily recoverable is a more complex task because the technical bases are not defined by law and much more complex computations are required. In particular, companies have to predict the discounted amount of payments, reflecting mortality, for all pensioners (a task monitored monthly at AXA). The purpose of this research was thus to develop a stochastic model for the future mortality of the workers' compensation pensioners of both the Portuguese market and the AXA portfolio. Not only is past mortality modeled; projections of future mortality are also made for the general population of Portugal as well as for the two portfolios mentioned earlier. The global model was split into two parts: a stochastic model for population mortality, which allows for forecasts, combined with a point estimate from a portfolio mortality model obtained through three different relational models (Cox proportional, Brass linear and Workgroup PLT). One-year death probabilities for ages 0-110 for the period 2013-2113 are obtained for the general population and the portfolios. These probabilities are used to compute various life table functions as well as the not compulsorily recoverable reserves, for each of the models, for the pensioners, their spouses and children under 21. The results are compared with the not compulsorily recoverable reserves computed using the static mortality table (TD 73/77) currently used by AXA, to see the impact on this reserve if AXA adopted the dynamic tables.
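Given one-year death probabilities q_x, the life table functions mentioned above follow mechanically. A minimal sketch with a toy five-age mortality vector (the actual work spans ages 0-110 and uses the cited tables):

```python
def life_table(qx, radix=100_000):
    """Build basic life-table functions from one-year death probabilities q_x.

    Returns survivors l_x, deaths d_x and curtate life expectancy e_x.
    Illustrative sketch; q at the final age must be 1 so the cohort closes.
    """
    n = len(qx)
    lx = [float(radix)]
    for q in qx[:-1]:
        lx.append(lx[-1] * (1.0 - q))        # survivors to the next age
    dx = [lx[i] * qx[i] for i in range(n)]   # deaths at each age
    # curtate expectation of life: e_x = sum_{k > x} l_k / l_x
    ex = [sum(lx[i + 1:]) / lx[i] for i in range(n)]
    return lx, dx, ex

# Toy example with made-up probabilities (not TD88/90 or TD73/77 values)
lx, dx, ex = life_table([0.01, 0.02, 0.05, 0.2, 1.0])
```

Reserves are then obtained by discounting expected payments weighted by these survival probabilities, which is where the choice of static versus dynamic table changes the result.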

Keywords: compulsorily recoverable, life table functions, relational models, worker’s compensation pensioners

Procedia PDF Downloads 144
3449 Effect of Lithium Bromide Concentration on the Structure and Performance of Polyvinylidene Fluoride (PVDF) Membrane for Wastewater Treatment

Authors: Poojan Kothari, Yash Madhani, Chayan Jani, Bharti Saini

Abstract:

Requirements for quality drinking and industrial water are increasing while water resources are depleting; moreover, large amounts of wastewater are generated and dumped into water bodies without treatment. Improving water treatment efficiency and water reuse has therefore become an important agenda. Membrane technology for wastewater treatment is an advanced process that has become increasingly popular in the past few decades. There are many traditional methods for tertiary treatment, such as chemical coagulation and adsorption. However, recent developments in membrane technology have led to the manufacture of better-quality membranes at reduced cost. This, together with the high cost of conventional treatment processes and the high separation efficiency and relative simplicity of membrane treatment, has made it an economically viable option for municipal and industrial purposes. Ultrafiltration (UF) polymeric membranes can be used for wastewater treatment and drinking water applications. The proposed work focuses on the preparation of one such UF membrane, polyvinylidene fluoride (PVDF) doped with LiBr, for wastewater treatment. Most polymeric membranes are hydrophobic in nature; this property repels water, so solute particles occupy the pores, decreasing the lifetime of the membrane. Modification of the membrane through the addition of a small amount of a salt such as LiBr helped us attain the desired membrane characteristics for wastewater treatment. The membrane was characterized by measuring porosity, contact angle and wettability, to establish its hydrophilic nature, and by examining its surface and cross-sectional morphology. Pure water flux, solute rejection and permeability were determined by permeation experiments, and a study of the membrane characteristics at various LiBr concentrations allowed their effectiveness to be compared.
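The permeation quantities referred to above are conventionally computed as pure water flux J = V/(A·t) and observed rejection R = 1 − Cp/Cf. A brief sketch with assumed, not measured, values:

```python
def water_flux(volume_l, area_m2, time_h):
    """Pure water flux J = V / (A * t), in L m^-2 h^-1 (LMH)."""
    return volume_l / (area_m2 * time_h)

def solute_rejection(c_permeate, c_feed):
    """Observed rejection R = 1 - Cp/Cf, expressed as a percentage."""
    return (1.0 - c_permeate / c_feed) * 100.0

# Illustrative numbers for a small lab coupon (assumptions, not study data)
j = water_flux(volume_l=0.5, area_m2=0.0015, time_h=1.0)   # ~333 LMH
r = solute_rejection(c_permeate=8.0, c_feed=100.0)         # 92 % rejection
```

Plotting J and R against LiBr concentration is the comparison the study describes: a good additive raises flux (hydrophilicity) without collapsing rejection.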

Keywords: Lithium bromide (LiBr), morphology, permeability, Polyvinylidene fluoride (PVDF), solute rejection, wastewater treatment

Procedia PDF Downloads 128
3448 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil, and it is common to conduct time-lapse surveys of different types at a given site to improve subsurface imaging. Regardless of the chosen survey methods, processing the massive amount of survey data is often a challenge. Currently available software applications are generally based on one-dimensional assumptions and target desktop personal computers; hence, they are usually incapable of imaging three-dimensional (3D) subsurface processes and variables at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the limitations of personal computers. High-performance, integrative software that enables real-time integration of multi-process geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale survey data from multiple geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP can process data across vast spatiotemporal scales in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability to image the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 137
3447 Evaluation of Functional Properties of Protein Hydrolysate from the Fresh Water Mussel Lamellidens marginalis for Nutraceutical Therapy

Authors: Jana Chakrabarti, Madhushrita Das, Ankhi Haldar, Roshni Chatterjee, Tanmoy Dey, Pubali Dhar

Abstract:

Protein-energy malnutrition, a consequence of low protein intake, is quite prevalent among children in developing countries, and the prevention of under-nutrition has emerged as a critical challenge for India's development planners. Population growth over the last decade has put greater pressure on existing animal protein sources, while these resources are declining due to persistent drought, disease, natural disasters, the high cost of feed and the low productivity of local breeds, a decline most evident in some developing countries. The need of the hour is therefore the efficient utilization of unconventional, low-cost animal protein resources. Molluscs as a group are regarded as an under-exploited source of health-benefit molecules. Bivalves are the second largest class of the phylum Mollusca, and annual harvests of bivalves for human consumption represent about 5% by weight of the total world harvest of aquatic resources. The freshwater mussel Lamellidens marginalis is widely distributed in ponds and large perennial water bodies in the Indian sub-continent and is well accepted as food all over India. Moreover, ethno-medicinal use of the flesh of Lamellidens by rural people to treat hypertension has been documented. The present investigation thus evaluates the potential of Lamellidens marginalis as a functional food. Mussels were collected from freshwater ponds and brought to the laboratory two days before experimentation for acclimatization. Shells were removed and the flesh was preserved at -20°C until analysis. Tissue homogenate was prepared for proximate studies; fatty acid and amino acid compositions were analyzed, and vitamin, mineral and heavy metal contents were also studied. Mussel protein hydrolysate was prepared using Alcalase 2.4 L, and the degree of hydrolysis was evaluated to analyze its functional properties. 
Ferric reducing antioxidant power (FRAP) and DPPH antioxidant assays were performed, and the anti-hypertensive property was evaluated by an angiotensin-converting enzyme (ACE) inhibition assay. Proximate analysis indicates that mussel meat contains moderate amounts of protein (8.30±0.67%), carbohydrate (8.01±0.38%) and reducing sugar (4.75±0.07%), but little fat (1.02±0.20%). The moisture content is quite high, but the ash content is very low. The phospholipid content is notably high (19.43%), and the lipid fraction contains substantial amounts of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which have proven prophylactic value. Trace elements are present in substantial amounts. A comparative study of the proximate nutrients of Labeo rohita, Lamellidens and cow's milk indicates that mussel meat can be used as a complementary food source. Functionality analyses of the protein hydrolysate show increases in fat absorption, emulsification, foaming capacity and protein solubility, and progressive antioxidant and anti-hypertensive properties have also been documented. Lamellidens marginalis can thus be regarded as a functional food source, as it may combine effectively with other food components to provide essential elements to the body. Moreover, the mussel protein hydrolysate offers opportunities for use in various food formulations and pharmaceuticals. The observations presented herein should be viewed as a prelude to what the future holds.
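The DPPH and ACE-inhibition assays mentioned above are conventionally reported as percentage inhibition relative to a control absorbance. A minimal sketch with assumed absorbance readings (the exact blank-correction convention varies between protocols):

```python
def dpph_inhibition(a_control, a_sample):
    """DPPH radical-scavenging activity: % inhibition = (Ac - As) / Ac * 100."""
    return (a_control - a_sample) / a_control * 100.0

def ace_inhibition(a_control, a_sample, a_blank=0.0):
    """ACE inhibition (%), blank-corrected in one common convention (assumed)."""
    return (a_control - a_sample) / (a_control - a_blank) * 100.0

# Illustrative absorbances (assumed values, not the study's readings)
dpph = dpph_inhibition(a_control=0.80, a_sample=0.35)   # 56.25 %
```

A dose-response series of such percentages is what yields the hydrolysate's IC50, the usual summary figure for antioxidant or anti-hypertensive potency.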

Keywords: functional food, functional properties, Lamellidens marginalis, protein hydrolysate

Procedia PDF Downloads 404
3446 A Life Cycle Assessment of Greenhouse Gas Emissions from the Traditional and Climate-smart Farming: A Case of Dhanusha District, Nepal

Authors: Arun Dhakal, Geoff Cockfield

Abstract:

This paper examines the emission potential of the farming practices that farmers have adopted in Dhanusha District of Nepal and the scope of these practices in climate change mitigation. Which practice is climate-smarter is the question that this paper aims to address through a life cycle assessment (LCA) of greenhouse gas (GHG) emissions. The LCA was performed to assess whether the emission potential differs between two broad farming systems (agroforestry-based and traditional agriculture) and, more specifically, among four farming systems. The required data were collected through a household survey of 200 randomly selected households. The sources of emissions across the farming systems were paddy cultivation, livestock, chemical fertilizer, fossil fuels, and biomass (fuel-wood and crop residue) burning; the amount of emission from each source varied with the farming system adopted. Emissions from biomass burning were the highest, while fossil fuels caused the lowest emissions in all systems. Emissions decreased gradually from traditional agriculture towards the highly integrated agroforestry-based farming system (HIS), indicating that integrating trees into a farming system not only sequesters more carbon but also helps reduce emissions from the system. The annual emissions for the HIS, the medium-integrated agroforestry-based farming system (MIS), the less-integrated agroforestry-based farming system (LIS), and the subsistence agricultural system (SAS) were 6.67 t ha⁻¹, 8.62 t ha⁻¹, 10.75 t ha⁻¹, and 17.85 t ha⁻¹, respectively. In one agroforestry cycle, the HIS, MIS, and LIS released 64%, 52%, and 40% less GHG than the SAS. Within the agroforestry-based farming systems, the HIS produced 25% and 50% less emissions than the MIS and LIS, respectively. Our findings suggest that a tree-based farming system is more climate-smart than traditional farming. If the other two benefits (carbon sequestered within the farm and in the natural forest because of agroforestry) are also considered, climate-smart farming reduces a considerable amount of emissions. Policy intervention is required to motivate farmers in developing countries to adopt such climate-friendly farming practices.
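The system-to-system comparisons above follow directly from the reported annual per-hectare figures; a minimal sketch, assuming the reductions are computed on the annual values (the abstract's per-cycle percentages differ slightly, since an agroforestry cycle spans more than one year):

```python
# Annual GHG emissions (t ha^-1) reported for each farming system
annual_emissions = {
    "HIS": 6.67,   # highly integrated agroforestry-based system
    "MIS": 8.62,   # medium-integrated agroforestry-based system
    "LIS": 10.75,  # less-integrated agroforestry-based system
    "SAS": 17.85,  # subsistence agricultural system
}

def reduction_vs(baseline: str, system: str, emissions=annual_emissions) -> float:
    """Percentage reduction of `system` relative to `baseline`."""
    return (1 - emissions[system] / emissions[baseline]) * 100

for system in ("HIS", "MIS", "LIS"):
    print(f"{system}: {reduction_vs('SAS', system):.0f}% less than SAS annually")
```

On the annual figures this yields roughly 63%, 52%, and 40% reductions relative to SAS, consistent with the ordering reported for the full agroforestry cycle.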

Keywords: life cycle assessment, greenhouse gas, climate change, farming systems, Nepal

Procedia PDF Downloads 592
3445 An Analysis on Aid for Migrants: A Descriptive Analysis on Official Development Assistance During the Migration Crisis

Authors: Elena Masi, Adolfo Morrone

Abstract:

Migration has recently become a mainstream development sector and is currently at the forefront of institutional and civil society debates. However, no consensus exists on how the link between migration and development operates, that is, how development is related to migration and how migration can promote development. On the one hand, Official Development Assistance (ODA) is recognized as one of the levers of development. On the other hand, the debate focuses on what the scope of aid programs targeting migrant groups, and the migration process in general, should be. This paper provides a descriptive analysis of how development aid for migration was allocated in the recent past, focusing on the actions funded and implemented by the international donor community. In the absence of an internationally shared methodology for defining the boundaries of development aid on migration, the analysis is based on lexical hypotheses applied to the title or the short description of initiatives funded by several Organisation for Economic Co-operation and Development (OECD) countries. Moreover, the research describes and quantifies aid flows for each country according to different criteria. The terms 'migrant' and 'refugee' are used to identify the projects in accordance with the most internationally agreed definitions, and only actions in countries of transit or origin are considered eligible, thus excluding amounts spent on refugees in donor countries. The results show that the share of projects targeting migrants, in terms of amount, followed a growing trend from 2009 to 2016 in several European countries and is positively correlated with the flows of migrants. Distinguishing between programs targeting migrants and programs targeting refugees, some specific national features emerge more clearly. A focus is devoted to actions targeting the root causes of migration, showing an inter-sectoral approach in international aid allocation. The analysis offers some tentative solutions to the lack of consensus on the language of migration and development aid, and emphasizes the need for an internationally agreed criterion for identifying programs targeting both migrants and refugees, to make actions more transparent and to develop effective strategies at the global level.
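The lexical-hypothesis step can be pictured as a keyword filter over project titles and descriptions; a minimal sketch with hypothetical project records (the term list, field names, and amounts are illustrative assumptions, not the paper's actual criteria or data):

```python
# Hypothetical aid-project records; real data would come from donor reporting systems.
projects = [
    {"title": "Support to refugee camps in Jordan", "amount": 2.5},
    {"title": "Rural road construction programme", "amount": 4.0},
    {"title": "Reintegration of returning migrants in Senegal", "amount": 1.2},
]

# Assumed term list implementing the lexical hypothesis
MIGRATION_TERMS = ("migrant", "migration", "refugee")

def targets_migrants(project: dict) -> bool:
    """Flag a project if its title mentions a migration-related term."""
    text = project["title"].lower()
    return any(term in text for term in MIGRATION_TERMS)

flagged = [p for p in projects if targets_migrants(p)]
share = sum(p["amount"] for p in flagged) / sum(p["amount"] for p in projects)
print(f"Share of aid amount targeting migrants: {share:.0%}")
```

Tracking this share year by year per donor country is what produces the time series the abstract describes; a distinct term list for 'refugee' versus 'migrant' would separate the two program types.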

Keywords: migration, official development assistance, ODA, refugees, time series

Procedia PDF Downloads 113
3444 Bioremediation Influence on Shear Strength of Contaminated Soils

Authors: Tawar Mahmoodzadeh

Abstract:

Soil contamination is an unavoidable issue today; irrespective of the environmental impact that occurs during the contamination and remediation processes, the influence of these processes on soil behaviour has not been researched thoroughly. In this study, unconfined compression and compaction tests were performed on contaminated samples and on soil treated by bioremediation for 50 days. The results show that increasing the amount of oil decreased the optimum water content and maximum dry density and increased the strength. Moreover, almost 65% of the contamination was removed by using Bioremer as a bioremediation agent.

Keywords: oil-contaminated soil, shear strength, compaction, bioremediation

Procedia PDF Downloads 129
3443 Synthesis of TiO₂/Graphene Nanocomposites with Excellent Visible-Light Photocatalytic Activity Based on Chemical Exfoliation Method

Authors: Nhan N. T. Ton, Anh T. N. Dao, Kouichirou Katou, Toshiaki Taniike

Abstract:

Facile electron-hole recombination and a broad band gap are two major drawbacks of titanium dioxide (TiO₂) in visible-light photocatalysis. Hybridization of TiO₂ with graphene is a promising strategy to mitigate these pitfalls. Recently, there have been many reports on the synthesis of TiO₂/graphene nanocomposites, in most of which graphene oxide (GO) was used as the starting material. However, the reduction of GO introduces a large number of defects into the graphene framework. In addition, the sensitivity of titanium alkoxides to water (which GO usually contains) significantly obstructs the uniform and controlled growth of TiO₂ on graphene. Here, we demonstrate a novel technique to synthesize TiO₂/graphene nanocomposites without the use of GO. A graphene dispersion was obtained through the chemical exfoliation of graphite in titanium tetra-n-butoxide with the aid of ultrasonication, and was directly used for the sol-gel reaction in the presence of different catalysts. A TiO₂/reduced graphene oxide (TiO₂/rGO) nanocomposite prepared from GO by a solvothermal method and the commercial TiO₂-P25 were used as references. It was found that the titanium alkoxide afforded a graphene dispersion of high quality, with a trace amount of defects and few-layer dispersed graphene.
Moreover, the sol-gel reaction from this dispersion led to TiO₂/graphene nanocomposites with characteristics promising for visible-light photocatalysis, including: (I) the formation of a TiO₂ nanolayer (thickness ranging from 1 nm to 5 nm) that uniformly and thinly covered the graphene sheets, (II) a trace amount of defects in the graphene framework (low ID/IG ratio: 0.21), (III) a significant extension of the absorption edge into the visible-light region (a remarkable extension to 578 nm beside the usual edge at 360 nm), and (IV) a dramatic suppression of electron-hole recombination (the lowest photoluminescence intensity among the samples compared). These advantages were demonstrated in the photocatalytic decomposition of methylene blue under visible-light irradiation: the TiO₂/graphene nanocomposites exhibited 15 and 5 times higher activity than TiO₂-P25 and the TiO₂/rGO nanocomposite, respectively.

Keywords: chemical exfoliation, photocatalyst, TiO₂/graphene, sol-gel reaction

Procedia PDF Downloads 138