Search results for: reduced order macro models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22871

4571 Impact of Agriculture on Groundwater Quality: Case of the Alluvial Plain of the Nil River (North-Eastern Algeria)

Authors: S. Benessam, T. H. Debieche, A. Drouiche, F. Zahi, S. Mahdid

Abstract:

The intensive use of chemical fertilizers and pesticides in agriculture often contaminates groundwater with organic pollutants. Irrigation and/or rainwater transport these pollutants towards groundwater or surface water. Among them is nitrogen, usually observed in agricultural zones in the nitrate form. This study was conducted in order to understand the chemical form and mobility of nitrogen in groundwater. The physicochemical and chemical parameters of the water of the alluvial plain of the Nil River (North-Eastern Algeria) were monitored every two months from November 2013 to January 2015, together with an in-situ survey of the various chemical products used by farmers. The results show elevated nitrate concentrations in the wells (depth < 20 m) of the plain, reaching 50 mg/L (the drinking water standard). In the boreholes (depth > 20 m), on the other hand, two behaviors are observed. In the upstream part, where the aquifer is unconfined and the medium is oxidizing, nitrate concentrations are low, indicating absorption by the soil as water infiltrates towards the groundwater. In the central and downstream parts, where the groundwater is locally confined and the medium is reducing, nitrates are absent and nitrites and ammonium appear, indicating the reduction of nitrates. Projecting the analyses onto Eh-pH diagrams of nitrogen enabled us to determine the intervals of variation of the nitrogen forms. The study also highlighted the effects of rainfall, pumping, and the nature of the geological formations on the form and mobility of nitrogen in the plain.

Keywords: groundwater, nitrogen, mobility, speciation

Procedia PDF Downloads 260
4570 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items

Authors: Wen-Chung Wang, Xue-Lan Qiu

Abstract:

Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements aim at the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because it has good measurement properties: the log-odds of preferring statement A to statement B are defined as a competition between two parts, the sum of the person's latent trait measured by statement A and statement A's utility, versus the sum of the person's latent trait measured by statement B and statement B's utility. The RIM has been extended to polytomous responses, such as strongly preferring statement A, preferring statement A, preferring statement B, and strongly preferring statement B. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., "my life is empty") many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate. In the first method, items were frozen when their statements had been administered more than a prespecified number of times. In the second method, a random component was added to control the contribution of the information at different stages of MCAT. The second method was found to outperform the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) maximum Fisher information without WPSE control, (c) maximum Fisher information with the first control method, and (d) maximum Fisher information with the second control method. These four methods were applied to two real tests: one was a work survey with dichotomous MPC items, and the other was a career interests survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful, that there was no loss in measurement efficiency when the control methods were implemented, and that, among the four methods, the last performed best.
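The RIM's core log-odds structure described above can be sketched as a logistic function of the difference between the two (trait + utility) sums. The function and parameter names below are illustrative, not taken from the paper:

```python
import math

def rim_prob_prefer_a(theta_a, theta_b, utility_a, utility_b):
    # Rasch ipsative model: the log-odds of preferring statement A over B
    # is (trait measured by A + A's utility) minus the same sum for B.
    logit = (theta_a + utility_a) - (theta_b + utility_b)
    return 1.0 / (1.0 + math.exp(-logit))
```

When the two sums are equal, the model predicts indifference (probability 0.5), and the probabilities of the two preference orders always sum to one.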

Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison

Procedia PDF Downloads 248
4569 Evaluation of Prestressed Reinforced Concrete Slab Punching Shear Using Finite Element Method

Authors: Zhi Zhang, Liling Cao, Seyedbabak Momenzadeh, Lisa Davey

Abstract:

Reinforced concrete (RC) flat slab-column systems are commonly used in residential and office buildings, as the flat slab provides efficient clearance, allowing more stories at a given height than a regular reinforced concrete beam-slab system. Punching shear at slab-column joints is a critical component of two-way reinforced concrete flat slab design. The unbalanced moment at the joint is transferred via slab moment and shear forces. ACI 318 provides an equation to evaluate the punching shear under the design load. It is important to note that the design code considers gravity and environmental loads in its design load combinations but does not consider the effect of differential foundation settlement, which may be a governing load condition for the slab design. This paper describes how prestressed reinforced concrete slab punching shear is evaluated based on ACI 318 provisions and finite element analysis. A prestressed reinforced concrete slab under differential settlements is studied using the finite element modeling methodology. The punching shear check equation is explained. The methodology for extracting punching shear check data from the finite element model is described and correlated with the corresponding code provisions. The study indicates that finite element analysis results should be carefully reviewed and processed in order to perform an accurate punching shear evaluation. Conclusions are drawn from the case studies to help engineers understand punching shear behavior in prestressed and non-prestressed reinforced concrete slabs.
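The eccentric-shear check referred to above combines direct shear with a fraction of the unbalanced moment carried by shear on the critical section. A simplified sketch for an interior rectangular critical section is shown below; it is illustrative only (consistent units assumed, section properties simplified) and does not replace the full ACI 318 provisions:

```python
import math

def punching_shear_stress(Vu, Mu, b1, b2, d, c):
    # Combined shear stress at a slab-column joint, ACI 318 style:
    # direct shear plus the eccentric-shear fraction of the unbalanced
    # moment. b1, b2 are the critical-section dimensions, d the effective
    # depth, c the distance from the section centroid to the face checked.
    b0 = 2.0 * (b1 + b2)                      # critical-section perimeter
    gamma_f = 1.0 / (1.0 + (2.0 / 3.0) * math.sqrt(b1 / b2))
    gamma_v = 1.0 - gamma_f                   # moment fraction carried by shear
    # Polar-moment-like property for an interior rectangular critical section
    Jc = (d * b1**3) / 6.0 + (b1 * d**3) / 6.0 + (d * b2 * b1**2) / 2.0
    return Vu / (b0 * d) + gamma_v * Mu * c / Jc
```

With zero unbalanced moment the expression reduces to the direct shear stress Vu / (b0 d), which is a quick sanity check when post-processing finite element results.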

Keywords: differential settlement, finite element model, prestressed reinforced concrete slab, punching shear

Procedia PDF Downloads 133
4568 Network Analysis of Genes Involved in the Biosynthesis of Medicinally Important Naphthodianthrone Derivatives of Hypericum perforatum

Authors: Nafiseh Noormohammadi, Ahmad Sobhani Najafabadi

Abstract:

Hypericins (hypericin and pseudohypericin) are natural naphthodianthrone derivatives produced by Hypericum perforatum (St. John's Wort), which have many medicinal properties, such as antitumor, antineoplastic, antiviral, and antidepressant activities. Production and accumulation of hypericin in the plant are influenced by both genetic and environmental conditions. Despite the availability of different high-throughput data on the plant, the genetic dimensions of hypericin biosynthesis have not yet been completely understood. In this research, 21 high-quality RNA-seq datasets from different parts of the plant were integrated with metabolic data to reconstruct a coexpression network. Results showed that a cluster of 30 transcripts was correlated with total hypericin. The identified transcripts were divided into groups based on their functions: hypericin biosynthesis genes, transporters, detoxification genes, and transcription factors (TFs). In the biosynthetic group, different isoforms of polyketide synthases (PKSs) and phenolic oxidative coupling proteins (POCPs) were identified. Phylogenetic analysis of protein sequences integrated with gene expression analysis showed that some of the POCPs appear to be very important in the biosynthetic pathway of hypericin. In the TF group, six TFs were correlated with total hypericin. qPCR analysis of these six TFs confirmed that three of them were highly correlated. The genes identified in this research are a rich resource for further studies on the molecular breeding of H. perforatum in order to obtain varieties with high hypericin production.
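Coexpression analyses of this kind typically correlate each transcript's expression profile with the metabolite level across samples. A minimal Spearman rank correlation sketch (no tie handling) is shown below as an illustration, not the authors' actual pipeline:

```python
def spearman(x, y):
    # Spearman rank correlation: Pearson correlation of the rank
    # vectors. Assumes no tied values, which keeps the sketch short.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Applied per transcript against total hypericin across samples, values near +1 or -1 would flag candidates for a coexpression cluster.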

Keywords: hypericin, St. John’s Wort, data mining, transcription factors, secondary metabolites

Procedia PDF Downloads 96
4567 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry

Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich

Abstract:

The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. To assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, requiring only the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a beam of light from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half maximum (FWHM) was measured using the Amcap and ImageJ software. The FWHM values were organized into three groups. It was observed that the average of the FWHMs of group A shows an almost linear behavior; therefore, exposure time is probably not relevant when the oil is kept at constant temperature. Group B exhibits a slightly exponential behavior when the temperature rises between 373 K and 393 K. Results of Student's t-test indicate, with 95% confidence (α = 0.05), the existence of variation in the molecular composition of both samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
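A minimal version of the FWHM measurement applied to a one-dimensional fringe-intensity profile might look like the sketch below (linear interpolation at the half-maximum crossings; an illustration, not the exact ImageJ procedure):

```python
def fwhm(profile):
    # Full width at half maximum of a 1-D intensity profile, found by
    # linearly interpolating the crossings of the half-maximum level.
    half = max(profile) / 2.0
    above = [i for i, v in enumerate(profile) if v >= half]
    left, right = above[0], above[-1]
    def cross(i, j):
        # fractional index where the profile crosses the half-max level
        return i + (half - profile[i]) / (profile[j] - profile[i]) * (j - i)
    lo = cross(left - 1, left) if left > 0 else float(left)
    hi = cross(right, right + 1) if right < len(profile) - 1 else float(right)
    return hi - lo
```

For a symmetric triangular peak rising 0, 1, 2, 3 and falling back, this returns a width of 3 pixels; converting to physical units just requires the pixel scale.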

Keywords: food industry, interferometric, oils, quality control

Procedia PDF Downloads 377
4566 Determination of Performances of Some Mulberry (Morus spp.) Species Selected from Different Places of Turkey under Kahramanmaras Conditions

Authors: Muruvvet Ilgin, Ilknur Agca

Abstract:

Common mulberry (Morus levigate Wall.) and purple mulberry (Morus rubra L.) species selected from different regions of Turkey were used as material in order to determine their performance. Phenological observations, pomological analyses (fruit size, fruit weight, fruit stalk length, acidity, and total soluble solids (TSS)), and phytochemical properties (the organic acids oxalic acid, succinic acid, citric acid, fumaric acid, and malic acid; vitamin C (ascorbic acid); total phenolics; and antioxidant capacity) were determined. Seven different phenological periods were also identified. Fruit weight values varied between 3.48 and 4.26 g, TSS contents between 14.36 and 21.30%, and fruit acidity between 0.29 and 2.02%. The ascorbic acid contents of the finger mulberry (Morus levigate Wall.) and purple mulberry (Morus rubra L.) species were identified as 35.60% and 363.28%, respectively. The highest total phenolic content, 934.80 mg/100 g, belonged to the finger mulberry genotype P1, whereas the lowest, 278.70 mg/100 g, was found in a purple mulberry genotype. The FRAP and TEAC methods were used to determine antioxidant capacity, yielding values of 0.58-22.65 micromol TE/kg and 20.34-31.6 micromol TE/kg, respectively. Total phenolic content and antioxidant capacity depend strongly on fruit color intensity, with a positive correlation. The results obtained are considered important as a source for future pharmacological studies and for pomological and breeding programs.

Keywords: mulberry, phenology, phytochemical property, pomology

Procedia PDF Downloads 236
4565 Thermodynamic and Magnetic Properties of Heavy Fermion UTE₂ Superconductor

Authors: Habtamu Anagaw Muluneh, Gebregziabher Kahsay, Tamiru Negussie

Abstract:

The theoretical study of the density of states, condensation energy, specific heat, and magnetization in a spin-triplet superconductor is the main goal of this work. Utilizing the retarded double-time temperature-dependent Green's function formalism and constructing a model Hamiltonian for the system at hand, we derived expressions for the parameters mentioned above. The phase diagrams were plotted using MATLAB scripts. From the phase diagrams, the density of states increases with the excitation energy, reaching its maximum when the excitation energy equals the superconducting gap; it decreases when the energy exceeds the gap and finally approaches the normal-state density. The condensation energy decreases with increasing temperature, attains its minimum at the superconducting transition temperature (TC), and finally becomes zero, implying that the superconducting energy equals the normal-state energy. The specific heat increases with temperature, attaining its maximum at TC and then undergoing a jump, showing the presence of a second-order phase transition from the superconducting to the normal state. Finally, the magnetization of the itinerant and localized electrons decreases with increasing temperature and becomes zero at TC = 1.6 K and at the magnetic phase transition temperature T = 2 K, respectively, corresponding to a magnetic phase transition from a ferromagnetic to a paramagnetic state. Our findings are in good agreement with previous findings.

Keywords: spin triplet superconductivity, Green’s function, condensation energy, density of state, specific heat, magnetization

Procedia PDF Downloads 28
4564 Effect of Infill Density and Pattern on the Compressive Strength of Parts Produced by Polylactic Acid Filament Using Fused Deposition Modelling

Authors: G. K. Awari, Vishwajeet V. Ambade, S. W. Rajurkar

Abstract:

The field of additive manufacturing is growing, and new discoveries continue to be made. 3D printing machines are being developed to accommodate a wider range of printing materials, including plastics, metals (metal AM powders), composites, and filaments. Numerous printing materials are available for industrial additive manufacturing, each with its own characteristics, advantages, and disadvantages. To avoid errors in additive manufacturing, key elements such as material type, texture, cost, and printing technique and procedure must be examined; selecting the best material for a particular job can be complex. Polylactic acid (PLA) is made from sugar cane or cornstarch, both of which are renewable resources. Because it is safe to use and print, it is frequently used in primary and secondary schools, and it is widely used in FDM printing. PLA is simple to print because of its low warping, and it can even be printed on a cold surface. Compared to ABS, it allows sharper edges and features to be printed, and it comes in a wide range of colours. PLA is the most common material used in fused deposition modelling (FDM) and can be used to print a wide range of components, including medical implants, household items, and mechanical parts. The mechanical behaviour of a printed item is affected by variations in the infill pattern; in the current investigation, printed specimens are subjected to compressive tests to examine their behaviour under compressive stresses.

Keywords: fused deposition modelling, polylactic acid, infill density, infill pattern, compressive strength

Procedia PDF Downloads 78
4563 The Analysis of Competitive Balance Progress among Five Most Valuable Football Leagues from 1966 to 2015

Authors: Seyed Salahedin Naghshbandi, Zahra Bozorgzadeh, Leila Zakizadeh, Siavash Hamidzadeh

Abstract:

From the point of view of sport economy experts, the existence of competitive balance among sport leagues, with its numerous effects on a league, is an important and undeniable issue. In general, sport fans are eager for unpredictable competition results, which provide excitement and the motivation to keep following the competitions. The purpose of this research is to consider and compare competitive balance among five professional European football leagues (Spain, England, Italy, France, and Germany) during the 1966-2015 seasons. The research data are secondary, obtained from the top-division final tables of the selected countries for the 1966-2015 seasons. The C5 and C5ICB indicators were used to analyze the data; the lower these indicators, the more balanced the league, and vice versa. The results showed that the French Ligue 1 moved from 1.259 to 1.395, the Italian Serie A from 1.316 to 1.432, the English Premier League from 1.342 to 1.455, the German Bundesliga from 1.238 to 1.465, and the Spanish La Liga from 1.295 to 1.495. Thus, comparing the C5ICB values over the 1966-2015 seasons, the Spanish La Liga moved most toward imbalance and was less balanced than the other European leagues, while the French Ligue 1 showed the least imbalance, preserving its relative balance in a steady process. It seems that football in France remained stable from 1966 to 2015, so prediction of results was more difficult and competitions were more attractive for spectators, whereas Italy, England, Germany, and Spain were, respectively, less balanced.
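The C5ICB values above can be read as the share of total league points taken by the top five clubs, normalised by the share those clubs would hold in a perfectly balanced N-team league (5/N): a value of 1 indicates perfect balance, and larger values indicate less balance, matching the 1.24-1.50 range reported. A minimal sketch, assuming this standard definition:

```python
def c5icb(points):
    # C5: share of total league points won by the top five clubs.
    top5 = sum(sorted(points, reverse=True)[:5])
    c5 = top5 / sum(points)
    # Normalise by the top-five share of an ideally balanced league (5/N).
    return c5 / (5.0 / len(points))
```

For a 20-team league in which every club finishes on the same points, the index is exactly 1; concentrating points in a few clubs pushes it above 1.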

Keywords: competitive balance, professional football league, competition, C5ICB

Procedia PDF Downloads 147
4562 The World in the 21st Century and Beyond: Convergence or Invariance

Authors: Saleh Maina

Abstract:

There is an ongoing debate among intellectuals and scholars of international relations and world politics over the direction in which the world is heading, particularly in the current era of globalization. On one hand are adherents of the convergence thesis, which is premised on the assumption that the global social order is tending toward universalism, which could translate into the possible end of the classical state system and the unification of world societies under a single, common ideological dispensation. The convergence thesis hinges on the globalization process, which is gradually reducing world societies to a 'global village'. On the other hand are intellectuals who hold that, despite advances in communication technology that appear to threaten the survival of the classical state system, invariance, as expressed in the survival of the existing state system and the diverse social traditions of world societies, remains a realistic possibility, contrary to the conclusions of the convergence thesis. The invariance thesis has been advanced by scholars like Samuel P. Huntington, whose work on the clash of civilizations suggests that world peace can only be sustained through the cohabitation of diverse civilizations across the world. The purpose of this paper is to examine both sides of the debate with the aim of making a realistic assessment of where world societies are headed, between convergence and invariance. Using the realist theory of international relations as its theoretical premise, the paper argues that while there is sufficient ground to predict that world societies are headed towards some form of convergence, invariance, as expressed in the coexistence of diverse civilizations, will long remain a major feature of the international system.

Keywords: convergence, invariance, clash of civilization, classical state system, universalism

Procedia PDF Downloads 311
4561 Optical Variability of Faint Quasars

Authors: Kassa Endalamaw Rewnu

Abstract:

The variability properties of a quasar sample, spectroscopically complete to magnitude J = 22.0, are investigated on a time baseline of 2 years using three different photometric bands (U, J, and F). The original sample was obtained using a combination of different selection criteria: colors, slitless spectroscopy, and variability, based on a time baseline of 1 year. The main goals of this work are two-fold: first, to derive the percentage of variable quasars on a relatively short time baseline; secondly, to search for new quasar candidates missed by the other selection criteria and thus to estimate the completeness of the spectroscopic sample. In order to achieve these goals, we extracted all candidate variable objects from a sample of about 1800 stellar or quasi-stellar objects with limiting magnitude J = 22.50 over an area of about 0.50 deg². We find that > 65% of all the objects selected as possible variables are either confirmed quasars or quasar candidates on the basis of their colors. This percentage increases even further if we exclude from our lists of variable candidates a number of objects equal to that expected on the basis of 'contamination' induced by our photometric errors. The percentage of variable quasars in the spectroscopic sample is also high, reaching about 50%. On the basis of these results, we estimate that the incompleteness of the original spectroscopic sample is < 12%. We conclude that variability analysis of data with small photometric errors can be successfully used as an efficient and independent (or at least auxiliary) selection method in quasar surveys, even when the time baseline is relatively short. Finally, when corrected for the different intrinsic time lags corresponding to a fixed observed time baseline, our data do not show a statistically significant correlation between variability and either absolute luminosity or redshift.
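Variability selection of the kind described above usually asks whether the scatter of an object's magnitudes significantly exceeds its photometric errors. The abstract does not give the exact statistic used, so the error-weighted chi-square test below is only a common-practice sketch, with illustrative names and threshold:

```python
def is_variable(mags, sigmas, threshold=3.0):
    # Flag a candidate variable when the reduced chi-square of its
    # magnitudes about the error-weighted mean is well above 1.
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * m for w, m in zip(weights, mags)) / sum(weights)
    chi2 = sum(((m - mean) / s) ** 2 for m, s in zip(mags, sigmas))
    dof = len(mags) - 1
    return chi2 / dof > threshold
```

With small, well-characterised photometric errors the test is sensitive even on short baselines, which is the regime the paper argues for.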

Keywords: nuclear activity, galaxies, active quasars, variability

Procedia PDF Downloads 86
4560 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques

Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet

Abstract:

5G and 6G technology offers enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in the 5G/6G architecture. In this paper, we propose the integration of free-space optical (FSO) communication with fiber sensor networks for IoT applications. FSO communication is gaining popularity as an effective alternative to the limited availability of radio frequency (RF) spectrum, owing to its flexibility, high achievable optical bandwidth, and low power consumption in several communication applications, such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO is an optimal choice for wireless networks to realize the full potential of 5G/6G technology, offering speeds of 100 Gbit/s or more in IoT applications. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important for achieving real-time, accurate, and smart monitoring in IoT applications. We also propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple fiber Bragg grating (FBG) sensors using only the FBG spectra obtained from a real experiment.

Keywords: optical sensor, artificial intelligence, Internet of Things, free-space optics

Procedia PDF Downloads 68
4559 A Qualitative Study Examining the Process of EFL Course Design from the Perspectives of Teachers

Authors: Iman Al Khalidi

Abstract:

Recently, English has become the language of globalization and technology. In turn, this has resulted in a seemingly bewildering array of influences and trends in the domain of the TESOL curriculum. In light of these changes, higher education has to provide a new and more powerful kind of education, one that prepares students to be more engaged citizens, more capable of solving complex problems at work, and well prepared to lead a meaningful life. In response, universities, colleges, schools, and departments have to work in light of the requirements and challenges of the global and technological era. Consequently, they have to focus on the adoption of a contemporary curriculum in line with the pedagogical shift from a teaching-centered to a learning-centered approach. Ideally, there has been noticeable emphasis on the crucial importance of developing and professionalizing teachers in order to engage them in the process of curriculum development and action research. This is a qualitative study that aims at understanding and exploring the process of designing EFL courses by teachers at the tertiary level from the perspectives of the participants in a professional context in TESOL: the Department of English at a private college in Oman. It is a case study grounded in the philosophy of the qualitative approach. It employs multiple methods for collecting qualitative data: semi-structured interviews with teachers, focus group discussions with students, and document analysis. The collected data have been analyzed qualitatively by adopting Miles and Huberman's approach, using procedures of reduction, coding, displaying, and conclusion drawing and verification.

Keywords: course design, components of course design, case study, data analysis

Procedia PDF Downloads 547
4558 A Qualitative Study Examining the Process of Course Design from the Perspectives of Teachers

Authors: Iman Al Khalidi

Abstract:

Recently, English has become the language of globalization and technology. In turn, this has resulted in a seemingly bewildering array of influences and trends in the domain of the TESOL curriculum. In light of these changes, higher education has to provide a new and more powerful kind of education, one that prepares students to be more engaged citizens, more capable of solving complex problems at work, and well prepared to lead a meaningful life. In response, universities, colleges, schools, and departments have to work in light of the requirements and challenges of the global and technological era. Consequently, they have to focus on the adoption of a contemporary curriculum in line with the pedagogical shift from a teaching-centered to a learning-centered approach. Ideally, there has been noticeable emphasis on the crucial importance of developing and professionalizing teachers in order to engage them in the process of curriculum development and action research. This is a qualitative study that aims at understanding and exploring the process of designing EFL courses by teachers at the tertiary level from the perspectives of the participants in a professional context in TESOL: the Department of English at a private college in Oman. It is a case study grounded in the philosophy of the qualitative approach. It employs multiple methods for collecting qualitative data: semi-structured interviews with teachers, focus group discussions with students, and document analysis. The collected data have been analyzed qualitatively by adopting Miles and Huberman's approach, using procedures of reduction, coding, displaying, and conclusion drawing and verification.

Keywords: course design, components of course design, case study, data analysis

Procedia PDF Downloads 443
4557 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning

Authors: Ioanna Taouki, Marie Lallier, David Soto

Abstract:

Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative forced-choice tasks (two linguistic: a lexical decision task and a visual attention span task; one non-linguistic: an emotion recognition task), including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how well confidence ratings track accuracy in the task) relates to performance in general standardized tasks of students' reading and general cognitive abilities, using Spearman's and Bayesian correlation analyses. Second, we assessed whether young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across task domains and evaluating cross-task covariance by applying a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability at this early stage is related to children's longitudinal learning in a linguistic and a non-linguistic task. Notably, we did not observe any association between students' reading skills and metacognitive processing at this early stage of reading acquisition. Some evidence consistent with domain-general metacognition was found, with significant positive correlations in metacognitive efficiency between the lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in the linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities, and they further stress the importance of creating educational programs that foster students' metacognitive ability as a tool for long-term learning. More research is crucial to understand whether such programs can enhance metacognitive ability as a transferable skill across distinct domains or whether individual domains should be targeted separately.
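The notion of confidence "tracking" accuracy can be illustrated with a crude proxy: the mean confidence on correct trials minus the mean on error trials. The study itself fits a full Signal Detection Theory model of metacognitive efficiency; this gap statistic is only a sketch with illustrative names:

```python
def confidence_accuracy_gap(correct, confidence):
    # Mean confidence on correct trials minus mean confidence on errors.
    # A positive gap indicates some metacognitive sensitivity; zero means
    # confidence carries no information about trial-by-trial accuracy.
    right = [c for ok, c in zip(correct, confidence) if ok]
    wrong = [c for ok, c in zip(correct, confidence) if not ok]
    return sum(right) / len(right) - sum(wrong) / len(wrong)
```

Unlike model-based measures, this gap is confounded with overall task performance and confidence bias, which is precisely why SDT-based estimates are preferred in studies like this one.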

Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition

Procedia PDF Downloads 153
4556 Sustainability Assessment of Food Delivery with Last-Mile Delivery Droids, A Case Study at the European Commission's JRC Ispra Site

Authors: Ada Garus

Abstract:

This paper presents the outcomes of a sustainability assessment of food delivery with a last-mile delivery droid service introduced in a real-world case study. The methodology integrates multi-criteria decision-making analysis, the sustainability pillars, and scenario analysis to best reflect the conflicting needs of the stakeholders involved in the last-mile delivery system. The case study applies the framework to the food delivery system of the Joint Research Centre of the European Commission, where three alternative solutions were analyzed: (i) the current state, in which individuals frequent the local canteen or pick up their food using their preferred mode of transport; (ii) a hypothetical scenario in which individuals can only order their food through the delivery droid system; and (iii) a scenario in which the droid-based food delivery system is introduced as a supplement to the current system. The environmental indices are calculated using a simulation study in which decisions regarding food delivery are predicted using a multinomial logit model. A vehicle dynamics model is used to predict the fuel consumption of the conventional combustion-engine vehicles used by canteen goers and the electricity consumption of the droid. The sustainability assessment allows for the evaluation of the economic, environmental, and social aspects of food delivery, making it an apt input for policymakers. Moreover, the assessment is one of the first studies to investigate automated delivery droids, which could become a frequent addition to the urban landscape in the near future.
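The multinomial logit step in such a simulation can be sketched as follows. The alternatives, utility form, and coefficients below are invented for illustration and are not the paper's estimated model.

```python
# Hedged sketch of a multinomial logit mode-choice model of the kind the
# abstract describes: choice probabilities from systematic utilities.
import math

def mnl_probabilities(utilities):
    # P(i) = exp(V_i) / sum_j exp(V_j), with the max subtracted for stability
    m = max(utilities.values())
    expV = {k: math.exp(v - m) for k, v in utilities.items()}
    z = sum(expV.values())
    return {k: v / z for k, v in expV.items()}

# Hypothetical systematic utilities V = beta_time * time + beta_cost * cost
V = {
    "canteen_walk": -0.05 * 15 - 0.4 * 0.0,   # 15 min walk, no fee
    "pickup_car":   -0.05 * 8 - 0.4 * 1.0,    # 8 min drive, fuel cost
    "droid_order":  -0.05 * 2 - 0.4 * 2.0,    # 2 min ordering, delivery fee
}
print({k: round(p, 3) for k, p in mnl_probabilities(V).items()})
```

In the simulation study, each agent's predicted choice would then feed the vehicle-dynamics and electricity-consumption models for the corresponding mode.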

Keywords: innovations in transportation technologies, behavioural change and mobility, urban freight logistics, innovative transportation systems

Procedia PDF Downloads 196
4555 Numerical Simulation of Free Surface Water Wave for the Flow Around NACA 0012 Hydrofoil and Wigley Hull Using VOF Method

Authors: Omar Imine, Mohammed Aounallah, Mustapha Belkadi

Abstract:

Steady three-dimensional free-surface waves generated by two moving bodies are presented. The flow problem to be simulated is rich in complexity and poses many modeling challenges because of the existence of breaking waves around the ship hull and because of the interaction of the two-phase flow with the turbulent boundary layer. The results of several simulations are reported. The first study was performed for the NACA 0012 hydrofoil with different meshes; this section is analyzed at h/c = 1.0345 in 2D. In the second simulation, a mathematically defined Wigley hull form is used to investigate the application of a commercial CFD code to the prediction of the total resistance and its components from the tangential and normal forces on the wetted hull surface. The computed resistance and wave profiles are used to estimate the total resistance coefficient of the Wigley hull advancing in calm water under steady conditions. The commercial CFD software FLUENT version 12 is used for the computations in the present study, and the computational grid is generated with the code GAMBIT 2.3.26. The k-ω SST shear-stress-transport model is used for turbulence modeling, and the volume of fluid (VOF) technique is employed to simulate the free-surface motion. The second-order upwind scheme is used for discretizing the convection terms in the momentum transport equations, and the modified HRIC scheme for the VOF discretization. The results obtained compare well with the experimental data.
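Resistance results of this kind are conventionally reported through non-dimensional quantities. The short sketch below shows the standard definitions of the Froude number and the total resistance coefficient; all numerical values are invented, not the paper's data.

```python
# Standard non-dimensional quantities for ship-resistance reporting
# (illustrative values only).
import math

def froude(V, L, g=9.81):
    # Fr = V / sqrt(g * L): speed scaled by hull length
    return V / math.sqrt(g * L)

def ct(R, rho, S, V):
    # Ct = R / (0.5 * rho * S * V^2): resistance scaled by dynamic
    # pressure times wetted surface area
    return R / (0.5 * rho * S * V ** 2)

V, L = 1.45, 2.5              # hypothetical model speed (m/s), hull length (m)
print(round(froude(V, L), 3))
print(ct(R=6.0, rho=998.0, S=1.35, V=V))
```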

Keywords: free surface flows, breaking waves, boundary layer, Wigley hull, volume of fluid

Procedia PDF Downloads 379
4554 Physico-Chemical Characterization of Vegetable Oils from Oleaginous Seeds (Croton megalocarpus, Ricinus communis L., and Gossypium hirsutum L.)

Authors: Patrizia Firmani, Sara Perucchini, Irene Rapone, Raffella Borrelli, Stefano Chiaberge, Manuela Grande, Rosamaria Marrazzo, Alberto Savoini, Andrea Siviero, Silvia Spera, Fabio Vago, Davide Deriu, Sergio Fanutti, Alessandro Oldani

Abstract:

According to the Renewable Energy Directive II, the use of palm oil in diesel will be gradually reduced from 2023 and should reach zero in 2030 due to the deforestation caused by its production. Eni aims to find alternative feedstocks for its biorefineries in order to eliminate the use of palm oil by 2023. The ideal vegetable oils for use in bio-refineries are therefore those obtainable from plants that grow on marginal land and have a low impact on the food-and-feed chain; hence, Eni research is studying the possibility of using oleaginous seeds, such as castor, croton, and cotton, to extract oils to be exploited as feedstock in bio-refineries. To verify their suitability for the upgrading processes, an analytical protocol for their characterization has been drawn up and applied. The analytical characterizations include determination of water and ash content, elemental analysis (CHNS analysis, X-Ray Fluorescence, Inductively Coupled Plasma - Optical Emission Spectroscopy, ICP - Mass Spectrometry), and total acid number determination. Gas chromatography coupled to a flame ionization detector (GC-FID) is used to quantify the lipid content in terms of free fatty acids, mono-, di- and triacylglycerols, and fatty acid composition. Finally, Nuclear Magnetic Resonance and Fourier Transform-Infrared spectroscopies are exploited together with GC-MS and Fourier Transform-Ion Cyclotron Resonance to study the composition of the oils. This work focuses on the GC-FID analysis of the lipid fraction of these oils, as the main constituent and the one of greatest interest for bio-refinery processes.
Specifically, the lipid component of the extracted oil was quantified after sample silanization and transmethylation: silanization allows the elution of high-boiling compounds and is useful for determining the quantity of free acids and glycerides in oils, while transmethylation leads to a mixture of fatty acid esters and glycerol, thus allowing the glyceride composition to be evaluated in terms of Fatty Acid Methyl Esters (FAME). Cotton oil was extracted from cotton oilcake, croton oil was obtained by seed pressing and by ASE extraction of seeds and oilcake, while castor oil came from seed pressing (not performed in Eni laboratories). GC-FID analyses showed that the cotton oil is 90% triglycerides and about 6% diglycerides, while free fatty acids account for about 2%. In terms of FAME, C18 acids make up 70% of the total, and linoleic acid is the major constituent. Palmitic acid is present at 17.5%, while the other acids are in low concentration (<1%). Both analyses show the presence of non-gas-chromatographable compounds. Croton oils from seed pressing and extraction mainly contain triglycerides (98%). Concerning FAME, the main component is linoleic acid (approx. 80%). Oilcake croton oil shows a higher abundance of diglycerides (6% vs. ca. 2%) and a lower content of triglycerides (38% vs. 98%) compared to the previous oils. Finally, castor oil consists mostly of triacylglycerols (about 69%), followed by diglycerides (about 10%). Ricinoleic acid accounts for about 85.2% of total FAME, as a constituent of triricinolein, the most abundant triglyceride of castor oil. Based on the analytical results, these oils represent feedstocks of interest for possible exploitation as advanced biofuels.

Keywords: analytical protocol, biofuels, biorefinery, gas chromatography, vegetable oil

Procedia PDF Downloads 148
4553 The Influence of Imposter Phenomenon on the Experiences of Intimacy in Non-Binary Young Adults

Authors: Muskan Jain, Baiju Gopal

Abstract:

Objectives: Intimacy in interpersonal relationships is integral to psychological health and everyday wellbeing. Intimacy can be described as feelings of closeness, connection, and belonging within relationships, which are influenced by an individual's gender identity as well as life experiences. The study aims to explore the experiences of intimacy of non-binary individuals, a marginalized community at increased risk of developing the imposter phenomenon (IP), and the influence of IP on the development and sustenance of intimacy in relationships. Methods: The present study gathers detailed narratives from 10 non-binary young adults aged 18 to 25 in metropolitan cities of India. Thematic analysis was used for the data analysis. Results: Seven major themes emerged revolving around internalized criticism and self-deprecating behavior, which create distance between partners. The four themes that result in the internalization of criticism are lack of social stability, invalidation by social units, adverse life experiences, and estrangement due to gender identity. Three themes that encapsulate major difficulties in relationships are limited self-disclosure, inhibition of physical needs, and fear of taking space. The findings have been critically compared and contrasted with the existing body of literature in the domain, which sets the agenda for further inquiry. Conclusion: It is important for future studies to capture the experiences of non-binary genders in India to provide better therapeutic support, in order to assist them in forming meaningful and authentic relationships and thus increase overall wellbeing.

Keywords: imposter phenomenon, intimacy, internalized criticism, marginalized community

Procedia PDF Downloads 62
4552 A Generalised Propensity Score Analysis to Investigate the Influence of Agricultural Research Systems on Greenhouse Gas Emissions

Authors: Spada Alessia, Fiore Mariantonietta, Lamonaca Emilia, Contò Francesco

Abstract:

Bioeconomy offers the chance to face new global challenges and to advance the transition from a waste economy to an economy based on renewable resources and sustainable consumption. Air pollution is a grave issue among green challenges, mainly caused by anthropogenic factors. The agriculture sector is a great contributor to global greenhouse gas (GHG) emissions due to a lack of efficient management of the resources involved and of research policies. In particular, the livestock sector contributes to GHG emissions, deforestation, and nutrient imbalances. More effective agricultural research systems and technologies are crucial in order to improve farm productivity but also to reduce GHG emissions. Using FAOSTAT data concerning the EU countries, the aim of this research is to evaluate the impact of ASTI R&D (Agricultural Science and Technology Indicators) on GHG emissions for EU countries in 2015 by generalized propensity score procedures, estimating a dose-response function while also accounting for a set of covariates. Expected results show an influence of ASTI R&D on GHG emissions across EU countries. The implications are crucial: reducing GHG emissions by means of R&D-based policies and, correlatively, reaching eco-friendly management of the required resources by means of available green practices could play a crucial role in fair intra-generational outcomes.
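The generalized-propensity-score recipe the abstract applies can be sketched end to end on synthetic data: (1) model the continuous treatment (R&D intensity) given covariates, (2) evaluate the GPS as the conditional density at the observed treatment, (3) regress the outcome (GHG emissions) on treatment and GPS, and (4) average predictions over the sample to trace a dose-response function. Everything below is an illustrative sketch on invented data, not the study's specification.

```python
# Hirano-Imbens-style GPS dose-response sketch on synthetic data.
import math, random

random.seed(1)
n = 200
X = [random.gauss(0, 1) for _ in range(n)]              # covariate
T = [0.8 * x + random.gauss(0, 1) for x in X]           # treatment (R&D)
Y = [2.0 - 0.5 * t + 0.3 * x + random.gauss(0, 0.5)
     for t, x in zip(T, X)]                             # outcome (GHG)

def ols(rows, y):
    # tiny OLS via normal equations + Gaussian elimination
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda i: abs(A[i][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):
        beta[c] = (b[c] - sum(A[c][j] * beta[j] for j in range(c + 1, k))) / A[c][c]
    return beta

# (1) treatment model T ~ a0 + a1*X with normal errors
a0, a1 = ols([[1.0, x] for x in X], T)
resid = [t - (a0 + a1 * x) for t, x in zip(T, X)]
s2 = sum(e * e for e in resid) / n

# (2) GPS = normal density of observed T given X
gps = [math.exp(-e * e / (2 * s2)) / math.sqrt(2 * math.pi * s2) for e in resid]

# (3) outcome model Y ~ b0 + b1*T + b2*GPS
b0, b1, b2 = ols([[1.0, t, g] for t, g in zip(T, gps)], Y)

# (4) dose-response at level t: average prediction over the sample's covariates
def dose_response(t_level):
    total = 0.0
    for x in X:
        e = t_level - (a0 + a1 * x)
        g = math.exp(-e * e / (2 * s2)) / math.sqrt(2 * math.pi * s2)
        total += b0 + b1 * t_level + b2 * g
    return total / n

# lower R&D should show higher predicted emissions here (true effect is negative)
print(round(dose_response(-1.0), 2), round(dose_response(1.0), 2))
```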

Keywords: agricultural research systems, dose-response function, generalized propensity score, GHG emissions

Procedia PDF Downloads 279
4551 Investigating the Minimum RVE Size to Simulate Poly (Propylene carbonate) Composites Reinforced with Cellulose Nanocrystals as a Bio-Nanocomposite

Authors: Hamed Nazeri, Pierre Mertiny, Yongsheng Ma, Kajsa Duke

Abstract:

The background of the present study is the use of environment-friendly biopolymer and biocomposite materials. Among the recently introduced biopolymers, poly (propylene carbonate) (PPC) has been gaining attention. This study focuses on the size of the representative volume element (RVE) needed to simulate PPC composites reinforced by cellulose nanocrystals (CNCs) as a bio-nanocomposite. Before manufacturing such nanocomposites, numerical modeling should be employed to explore and predict mechanical properties, which may be accomplished by creating and studying a suitable RVE. In other studies, modeling of composites with rod-shaped fillers has been reported assuming that the fillers are unidirectionally aligned. However, modeling of non-aligned filler dispersions is considerably more difficult. This study investigates the minimum RVE size required to enable subsequent FEA modeling. The matrix and nano-fillers were modeled using the finite element software ABAQUS, assuming randomly dispersed fillers with a filler mass fraction of 1.5%. To simulate filler dispersion, a Monte Carlo technique was employed. The numerical simulation was implemented to find the composite elastic moduli. After commencing the simulation with a single filler particle, the number of particles was increased to assess the minimum number of filler particles that satisfies the requirements for an RVE, providing the composite elastic modulus in a reliable fashion.
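The Monte Carlo dispersion step can be illustrated as follows: randomly place non-overlapping "filler" particles in a cubic RVE until a target volume fraction is reached. The study models rod-like CNCs; the sketch below uses spheres purely to keep the illustration short, and all dimensions are invented.

```python
# Monte Carlo placement of non-overlapping spherical fillers in a cubic RVE
# (illustrative stand-in for the rod-shaped CNC fillers in the study).
import math, random

def fill_rve(edge, radius, target_vf, max_tries=100000, seed=0):
    rng = random.Random(seed)
    v_particle = 4.0 / 3.0 * math.pi * radius ** 3
    v_rve = edge ** 3
    centers, vf, tries = [], 0.0, 0
    while vf < target_vf and tries < max_tries:
        tries += 1
        # candidate center, kept fully inside the cube
        c = [rng.uniform(radius, edge - radius) for _ in range(3)]
        # accept only if it overlaps no previously placed particle
        if all(sum((a - b) ** 2 for a, b in zip(c, p)) >= (2 * radius) ** 2
               for p in centers):
            centers.append(c)
            vf = len(centers) * v_particle / v_rve
    return centers, vf

centers, vf = fill_rve(edge=100.0, radius=5.0, target_vf=0.015)
print(len(centers), round(vf, 4))
```

In the actual workflow, the accepted particle positions would be exported to the FE model (e.g., as ABAQUS geometry), and the RVE edge length would be increased until the predicted elastic modulus stabilizes.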

Keywords: biocomposite, Monte Carlo method, nanocomposite, representative volume element

Procedia PDF Downloads 447
4550 Capital Accumulation and Unemployment in Namibia, Nigeria and South Africa

Authors: Abubakar Dikko

Abstract:

The research investigates the causes of unemployment in Namibia, Nigeria and South Africa, and the role of capital accumulation in reducing the unemployment profiles of these economies as proposed by post-Keynesian economics. This is conducted through an extensive review of the literature on NAIRU models, focused on the post-Keynesian view of unemployment within the NAIRU framework. The NAIRU (non-accelerating inflation rate of unemployment) model has become a dominant framework used in macroeconomic analysis of unemployment. The study adopts the post-Keynesian argument that capital accumulation is a major determinant of unemployment. Unemployment remains the fundamental socio-economic challenge facing African economies and a burden on their citizens. Namibia, Nigeria and South Africa are great African nations battling high unemployment rates: in 2013, they recorded rates of 16.9%, 23.9% and 24.9% respectively. Most of the unemployed in these economies are youth. Roughly 40% of working-age South Africans have jobs, and the share is lower still in Nigeria and Namibia. Unemployment in Africa has wide implications for households, leading to extensive poverty and inequality and fuelling rampant criminality. Recently, South Africa has seen xenophobic attacks driven by unemployment, with citizens chasing away foreigners whom they accused of taking their jobs. The study proposes that there is a strong relationship between capital accumulation and unemployment in Namibia, Nigeria and South Africa, and that weak capital accumulation is responsible for the high unemployment rates in these countries. For these economies to achieve a steady-state level of employment and satisfactory levels of economic growth and development, capital accumulation needs to take place.
The countries in the study were selected after critical research and investigation, based on the following criteria: African economies with high unemployment rates above 15%, with about 40% of their workforce unemployed (the critical level of unemployment in Africa as expressed by the International Labour Organization (ILO)), and with low levels of capital accumulation. Appropriate statistical measures were employed using a time-series analysis, and the results revealed that capital accumulation is the main driver of unemployment performance in the chosen African countries: an increase in the accumulation of capital causes unemployment to fall significantly. The results of the research will be useful and relevant to the federal governments and the ministries, departments and agencies (MDAs) of Namibia, Nigeria and South Africa in resolving the high and persistent unemployment rates in their economies, a great burden that slows the growth and development of developing economies. The results can also be useful to the World Bank, the African Development Bank and the International Labour Organization (ILO) in their further research and studies on how to tackle unemployment in developing and emerging economies.

Keywords: capital accumulation, unemployment, NAIRU, Post-Keynesian economics

Procedia PDF Downloads 268
4549 The Use of Sustainability Criteria on Infrastructure Design to Encourage Sustainable Engineering Solutions on Infrastructure Projects

Authors: Shian Saroop, Dhiren Allopi

Abstract:

In order to stay competitive and to meet upcoming stricter environmental regulations and customer requirements, designers have a key role in making civil infrastructure environmentally sustainable. There is an urgent need for engineers to apply technologies and methods that deliver better and more sustainable performance of civil infrastructure, as well as a need to establish a standard of measurement for greener infrastructure, rather than merely using traditional solutions. However, there are no systems in place at the design stage that assess the environmental impact of design decisions on township infrastructure projects. This paper identifies alternative eco-efficient civil infrastructure design solutions and develops sustainability criteria and a toolkit to analyse the eco-efficiency of infrastructure projects. The proposed toolkit is aimed at promoting high-performance, eco-efficient, economical and environmentally friendly design decisions on stormwater, roads, water and sanitation in township infrastructure projects. These green solutions would bring a whole new class of eco-friendly answers to current infrastructure problems, while at the same time adding a fresh perspective to the traditional infrastructure design process. A variety of projects were evaluated using the green infrastructure toolkit and their results compared to each other, to assess the outcomes of using greener infrastructure versus the traditional method of designing infrastructure. The application of ‘green technology’ would ensure sustainable design of township infrastructure services by prompting designers to consider alternative resources, the environmental impacts of design decisions, ecological sensitivity issues, innovation, maintenance and materials at the design stage of a project.

Keywords: eco-efficiency, green infrastructure, infrastructure design, sustainable development

Procedia PDF Downloads 229
4548 Emulsified Oil Removal in Produced Water by Graphite-Based Adsorbents Using Adsorption Coupled with Electrochemical Regeneration

Authors: Zohreh Fallah, Edward P. L. Roberts

Abstract:

One of the big challenges in produced water treatment is removing oil that is present as emulsified droplets, which are not easily separated. An attractive approach is adsorption, as it is a simple and effective process; however, the adsorbent must be regenerated in order to make the process cost-effective. Several sorbents have been tested for treating oily wastewater, but issues such as the high energy consumption of thermal regeneration of activated carbon have been reported. Due to their significant electrical conductivity, Graphite Intercalation Compounds (GICs) were found to be suitable for electrochemical regeneration. They are non-porous materials with low surface area and fast adsorptive capacity, useful for the removal of low concentrations of organics. An innovative adsorption/regeneration process has been developed at the University of Manchester in which organics are adsorbed by a patented GIC adsorbent and the adsorbent is then regenerated electrochemically. The oxidation of the adsorbed organics enables 100% regeneration, so the adsorbent can be reused over multiple adsorption cycles. GIC adsorbents are capable of removing a wide range of organics and pollutants; however, no comparable report is available on the removal of emulsified oil from produced water using this process. In this study, the performance of the technology for the removal of emulsified oil from wastewater was evaluated. Batch experiments were carried out to determine the adsorption kinetics and equilibrium isotherm for both real produced water and model emulsions. The amount of oil in the wastewater was measured using toluene extraction/fluorescence analysis before and after the adsorption and electrochemical regeneration cycles. It was found that the oil-in-water emulsion could be successfully treated by the process, with more than 70% of the oil removed.
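The equilibrium-isotherm analysis mentioned above is often summarized with the Langmuir model, q = q_max·K·C / (1 + K·C). The sketch below shows the relation; the parameter values are illustrative assumptions, not the paper's fitted constants.

```python
# Langmuir isotherm sketch: equilibrium uptake q (e.g., mg oil per g of GIC
# adsorbent) versus equilibrium oil concentration C. Parameters are invented.
def langmuir(C, q_max, K):
    # q rises roughly linearly at low C and saturates at q_max
    return q_max * K * C / (1 + K * C)

q_max, K = 2.5, 0.8        # hypothetical mg/g and L/mg
for C in (0.5, 2.0, 10.0):
    print(C, round(langmuir(C, q_max, K), 3))
```

Fitting q_max and K to batch-equilibrium data before and after regeneration cycles is one simple way to quantify how completely the adsorbent's capacity is restored.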

Keywords: adsorption, electrochemical regeneration, emulsified oil, produced water

Procedia PDF Downloads 585
4547 Climate Change Based Frontier Research in Landscape Architecture

Authors: Xiaoyan Wang, Zhongde Wang

Abstract:

The issue of climate change, which emerged in the middle of the twentieth century, has become a focus of attention for international political, academic, and non-governmental organizations and for the public. In order to address the problems caused by climate change, the Chinese government has proposed a dual-carbon target and taken national measures such as ecological priority and green low-carbon development. These goals and measures are highly aligned with the values of the landscape architecture industry. This is an opportunity for the architectural discipline and the landscape architecture industry, so it is necessary to summarize and analyze the climate-change-related hotspots in the field of building science in China, which can assist the landscape architecture industry and related organizations in formulating more rational professional goals and taking actions that contribute to the betterment of social and environmental development. The study finds the following: first, after 20 years of rapid development, research on climate change in the major architectural disciplines has shown a trend towards diversified research perspectives, interdisciplinary cross-cutting, and broadened content; second, landscape architecture research focuses on strategies to adapt to climate change, such as the selection of urban tree species, the spatial layout of urban green infrastructure, and the resilient city. Finally, climate-change-based landscape architecture research will diversify further in content, but it remains necessary to deepen research on quantitative methodology and to construct systematic, multi-scale planning and design methods.

Keywords: climate change, landscape architecture, knowledge mapping, CiteSpace

Procedia PDF Downloads 58
4546 Worldwide Prosperity Through Democracy: A Cross-country Examination of the Impact of Democratization on Human Development from 1990

Authors: Martin Plener

Abstract:

Developmental and democratization research has a long tradition of focusing on the relationship between democratization and economic development. However, recent studies have shown that economic development is not adequate for measuring the actual living conditions of civilian populations. In consequence, it is unclear whether a democratization process helps to improve people’s quality of life. This work addresses this issue by investigating the influence of democratization on the Human Development Index (HDI) created by the United Nations. The main objective is to study the relationship between democracy and human development and whether democratization positively impacts the living conditions of the population over time. The main mechanism supporting a positive impact is that democratic structures promote participation and political involvement of people from all social classes, resulting in a better articulation of interests and thus accountability of the government. To study this issue, a panel regression with fixed effects is conducted, examining whether democracy has a positive impact on the HDI (Hypothesis 1) and whether this effect weakens in more developed democracies compared to less developed ones (Hypothesis 2). The results do not reveal a direct positive relationship between the democratization of a country and the development of its HDI, so Hypothesis 1 is not supported. Contrary to Hypothesis 2, the effect of democratization on human development appears to be negative in countries in which democracy is barely developed. Therefore, both hypotheses must be discarded. The results instead indicate a positive effect of economic development on human development. The impact of democracy on the well-being of countries’ citizens therefore needs to be reinvestigated in order to create a better understanding of how improved human development can be achieved.
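The fixed-effects estimator at the core of such a panel regression can be sketched with the within transformation: demean outcome and regressor within each country, then run pooled OLS on the demeaned values, so time-invariant country traits drop out. The panel below is synthetic, generated only to illustrate the mechanics; it is not the study's data.

```python
# Within (fixed-effects) estimator on a synthetic country-year panel.
import random
from collections import defaultdict

random.seed(3)
alpha = {c: random.gauss(0, 1) for c in range(30)}    # unobserved country effects
data = []
for c in range(30):
    for t in range(10):
        dem = random.uniform(0, 1)                    # democracy score
        hdi = 0.4 * dem + alpha[c] + random.gauss(0, 0.1)
        data.append((c, dem, hdi))

def within_estimator(rows):
    # rows: (group, x, y) -> slope of y on x after within-group demeaning
    gx, gy, gn = defaultdict(float), defaultdict(float), defaultdict(int)
    for g, x, y in rows:
        gx[g] += x
        gy[g] += y
        gn[g] += 1
    num = den = 0.0
    for g, x, y in rows:
        dx = x - gx[g] / gn[g]
        dy = y - gy[g] / gn[g]
        num += dx * dy
        den += dx * dx
    return num / den

print(round(within_estimator(data), 2))   # should land near the true slope 0.4
```

Because the country means absorb alpha, the estimate is unbiased even though the country effects are correlated with nothing observable, which is exactly why the fixed-effects design suits cross-country HDI data.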

Keywords: democracy, human development, modernization theory, HDI, TSCS

Procedia PDF Downloads 83
4545 Full Length Transcriptome Sequencing and Differential Expression Gene Analysis of Hybrid Larch under PEG Stress

Authors: Zhang Lei, Zhao Qingrong, Wang Chen, Zhang Sufang, Zhang Hanguo

Abstract:

Larch is the main afforestation and timber tree species in Northeast China, and drought is one of the main factors limiting the growth of Larch and other organisms there. In order to further explore the mechanism of Larch drought resistance, PEG was used to simulate drought stress. Full-length sequencing of Larch embryogenic callus under PEG-simulated drought stress was carried out by combining Illumina HiSeq and SMRT-seq. A total of 20.3 Gb of clean reads and 786,492 CCS reads were obtained from the second- and third-generation sequencing. lncRNA prediction on the de-redundant transcript sequences yielded 2,083 lncRNAs, and target-gene prediction yielded 2,712 target genes. Further screening of the de-redundant transcripts identified 1,654 differentially expressed genes (DEGs). Different DEGs respond to drought stress in different ways, via pathways such as the oxidation-reduction process, starch and sucrose metabolism, plant hormone signalling, carbon metabolism, and lignin catabolic/biosynthetic processes. This study provides basic full-length sequencing data for the study of Larch drought resistance and identifies a large number of DEGs responding to drought stress, which helps us further understand the function of Larch drought resistance genes and provides a reference for in-depth analysis of the molecular mechanism of Larch drought resistance.

Keywords: larch, drought stress, full-length transcriptome sequencing, differentially expressed genes

Procedia PDF Downloads 175
4544 Making Meaning, Authenticity, and Redefining a Future in Former Refugees and Asylum Seekers Detained in Australia

Authors: Lynne McCormack, Andrew Digges

Abstract:

Since 2013, the Australian government has enforced mandatory detention of anyone arriving in Australia without a valid visa, including those subsequently identified as a refugee or seeking asylum. While consistent with the increased use of immigration detention internationally, Australia’s use of offshore processing facilities both during and subsequent to refugee status determination processing has until recently remained a unique feature of Australia’s program of deterrence. The commonplace detention of refugees and asylum seekers following displacement is a significant and independent source of trauma and a contributory factor in adverse psychological outcomes. Officially, these individuals have no prospect of resettlement in Australia, are barred from applying for substantive visas, and are frequently and indefinitely detained in closed facilities such as immigration detention centres, or alternative places of detention, including hotels. It is also important to note that the limited access to Australia’s immigration detention population made available to researchers often means that data available for secondary analysis may be incomplete or delayed in its release. Further, studies into the lived experience of refugees and asylum seekers are typically cross-sectional and convenience sampled, employing a variety of designs and research methodologies that limit comparability and focused on the immediacy of the individual’s experience. Consequently, how former detainees make sense of their experience, redefine their future trajectory upon release, and recover a sense of authenticity and purpose, is unknown. As such, the present study sought the positive and negative subjective interpretations of 6 participants in Australia regarding their lived experiences as refugees and asylum seekers within Australia’s immigration detention system and its impact on their future sense of self. 
It made use of interpretative phenomenological analysis (IPA), a qualitative research methodology interested in how individuals make sense of, and ascribe meaning to, their unique lived experiences of phenomena. Underpinned by phenomenology, hermeneutics, and critical realism, this idiographic study explored how former refugees and asylum seekers held in detention in Australia make sense of their experiences, how detention has impacted their overall journeys as displaced persons, and how they have moved forward in the aftermath of protracted detention in Australia. Examining the unique lived experiences of previously detained refugees and asylum seekers may inform the future development of theoretical models of posttraumatic growth among this vulnerable population, thereby informing the delivery of future mental health and resettlement services.

Keywords: mandatory detention, refugee, asylum seeker, authenticity, interpretative phenomenological analysis

Procedia PDF Downloads 100
4543 Dietary Habits and Cardiovascular Risk factors Among the Patients of the Coronary Artery Disease: A Case Control Study

Authors: Muhammad Kamran Hanif Khan, Fahad Mushtaq

Abstract:

Globally, the death rate from cardiovascular disease (CVD) has risen over the past 20 years, especially in low- and middle-income countries (LMICs), reports the World Health Organization (WHO). Around 17.5 million deaths, or 31% of all deaths worldwide in 2012, were attributed to CVD, 80% of which occurred in low- and middle-income nations, and 85% of all worldwide disability is attributable to cardiovascular disease. This study assessed dietary habits and cardiovascular risk factors among patients with coronary artery disease against matched controls. The research was a case-control study with 410 CAD cases and 410 healthy controls (a 1:1 case-control ratio). Patients diagnosed with coronary artery disease were recruited from the outpatient departments and emergency rooms of four hospitals in Pakistan. The ages of the patients diagnosed with coronary artery disease (mean 57.97 ± 7.39 years) were not significantly different from those of the healthy controls (mean 57.12 ± 6.73 years). In order to determine the relationship between food consumption and case-control status, logistic regression analysis was carried out. Chicken (OR 0.340, 95% CI 0.245-0.47, p-value 0.0001), beef (OR 0.38, 95% CI 0.254-0.56, p-value 0.0001), eggs (OR 0.297, 95% CI 0.208-0.426, p-value 0.0001), and junk food (OR 0.249, 95% CI 0.167-0.372, p-value 0.0001) were protective, while yogurt consumption more than twice weekly was a risk factor. Conclusion: poor dietary habits are closely linked to the risk of CAD. Investigations based on dietary trends offer vital and practical knowledge about societal patterns.
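How an odds ratio like those reported above emerges from a logistic regression can be illustrated on a toy case-control sample: fit the coefficient by gradient ascent on the log-likelihood, then OR = exp(beta), with OR < 1 reading as "protective". The data and effect size below are invented, not the study's.

```python
# Toy logistic regression on synthetic case-control data; the exposure is
# built to lower the odds of being a case with true OR = exp(-1) ≈ 0.37.
import math, random

random.seed(7)
n = 2000
X = [random.random() < 0.5 for _ in range(n)]                 # exposure yes/no
# P(case) = sigmoid(-1) ≈ 0.27 if exposed, sigmoid(0) = 0.5 if not
Y = [random.random() < (1 / (1 + math.exp(1.0)) if x else 0.5) for x in X]

b0 = b1 = 0.0
for _ in range(500):                                          # gradient ascent
    g0 = g1 = 0.0
    for x, y in zip(X, Y):
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))
        g0 += y - p                                           # score for intercept
        g1 += (y - p) * x                                     # score for exposure
    b0 += 2.0 * g0 / n
    b1 += 2.0 * g1 / n

print(round(math.exp(b1), 2))   # estimated odds ratio; true value ≈ 0.37
```

The confidence intervals in the abstract would come from the curvature of the same log-likelihood at the fitted coefficients.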

Keywords: dietary habits, cardiovascular disease, CVD risk factors, hypercholesterolemia

Procedia PDF Downloads 84
4542 Biomedical Application of Green Biosynthesis Magnetic Iron Oxide (Fe3O4) Nanoparticles Using Seaweed (Sargassum muticum) Aqueous Extract

Authors: Farideh Namvar, Rosfarizan Mohamed

Abstract:

In the field of nanotechnology, the use of biological entities instead of toxic chemicals for the reduction and stabilization of nanoparticles has received extensive attention. This use of biological entities to create nanoparticles has been designated "green" synthesis and is considered far more beneficial because it is economical, eco-friendly, and applicable to large-scale synthesis, operating at low pressure, low energy input, and low temperature. The lack of toxic byproducts and the consequent decrease in product degradation make this technique preferable to physical and classical chemical methods. The wide variety of biomass with reducing properties makes biological materials ideal candidates for nanoparticle fabrication. Metal oxide nanoparticles have been said to represent a "fundamental cornerstone of nanoscience and nanotechnology" due to their variety of properties and potential applications; however, this also reflects the fact that metal oxides include many diverse types of nanoparticles with large differences in chemical composition and behaviour. In this study, iron oxide nanoparticles (Fe3O4-NPs) were synthesized by a rapid, single-step, and completely green biosynthetic method: the reduction of a ferric chloride solution with brown seaweed (Sargassum muticum) water extract, whose polysaccharides act as the main reducing agent and an efficient stabilizer. Antimicrobial activity against six microorganisms was tested using the well diffusion method. The resulting S-IONPs are crystalline in nature, with a cubic shape, and an average particle diameter of 18.01 nm as determined by TEM. The S-IONPs efficiently inhibited the growth of Listeria monocytogenes, Escherichia coli, and Candida species. These favorable results suggest that S-IONPs could be a promising candidate for the development of future antimicrobial therapies.
The nature of this biosynthesis and the therapeutic potential of S-IONPs could pave the way for further research on the design of green-synthesized therapeutic agents, particularly in nanomedicine, for the treatment of infections. Further studies are needed to fully characterize the toxicity and the mechanisms involved in the antimicrobial activity of these particles. The antioxidant activity of the green-synthesized S-IONPs was measured by the ABTS (2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid)) radical scavenging assay (IC50 = 1000 µg). In addition, catalase gene expression, normalized to the reference gene GAPDH, increased with increasing S-IONP concentration. For the anti-angiogenesis study, fertilized Ross eggs were divided into four groups: a control group and three experimental groups. Gelatin sponges containing albumin were placed on the chorioallantoic membrane and soaked with different concentrations of S-IONPs. All cases were photographed using a photo stereomicroscope, and the number and lengths of the vessels were measured using ImageJ software. The crown-rump (CR) length and weight of the embryos were also recorded. According to the data analysis, the number and length of the blood vessels, as well as the CR length and weight of the embryos, decreased significantly and dose-dependently compared to the control (p < 0.05). Total hemoglobin, quantified as an indicator of blood vessel formation, also decreased in the treated samples, confirming the inhibitory effect of S-IONPs on angiogenesis.

Keywords: anti-angiogenesis, antimicrobial, antioxidant, biosynthesis, iron oxide (fe3o4) nanoparticles, sargassum muticum, seaweed

Procedia PDF Downloads 318