Search results for: final yield
477 Evolutionary Advantages of Loneliness with an Agent-Based Model
Authors: David Gottlieb, Jason Yoder
Abstract:
The feeling of loneliness is not uncommon in modern society, and yet, there is a fundamental lack of understanding of its origins and purpose in nature. One interpretation of loneliness is that it is a subjective experience that punishes a lack of social behavior, and thus its emergence in human evolution is seemingly tied to the survival of early human tribes. Still, a common counterintuitive response to loneliness is a state of hypervigilance, resulting in social withdrawal, which may appear maladaptive to modern society. So far, no computational model of the effect of loneliness during evolution yet exists; however, agent-based models (ABM) can be used to investigate social behavior, and applying evolution to agents' behaviors can demonstrate selective advantages for particular behaviors. We propose an ABM where each agent contains four social behaviors and one goal-seeking behavior, letting evolution select the best behavioral patterns for resource allocation. In our paper, we use an algorithm similar to the boid model to guide the behavior of agents, but expand the set of rules that govern their behavior. While we use cohesion, separation, and alignment for simple social movement, our expanded model adds goal-oriented behavior, which is inspired by particle swarm optimization, such that agents move relative to their personal best position. Since agents are given the ability to form connections by interacting with each other, our final behavior guides agent movement toward its social connections. Finally, we introduce a mechanism to represent a state of loneliness, which engages when an agent's perceived social involvement does not meet its expected social involvement. This enables us to investigate a minimal model of loneliness, and using evolution we attempt to elucidate its value in human survival. Agents are placed in an environment in which they must acquire resources, as their fitness is based on the total resources collected. With these rules in place, we are able to run evolution under various conditions, including resource-rich environments and when disease is present. Our simulations indicate that there is strong selection pressure for social behavior under circumstances where there is a clear discrepancy between initial resource locations, and against social behavior when disease is present, mirroring hypervigilance. This not only provides an explanation for the emergence of loneliness but also reflects the diversity of responses to loneliness in the real world. In addition, there is evidence of a richness of social behavior when loneliness is present. By introducing just two resource locations, we observed a divergence in social motivation after agents became lonely, where one agent learned to move to the other, who was in a better resource position. The results and ongoing work from this project show that it is possible to glean insight into the evolutionary advantages of even simple mechanisms of loneliness. The model we developed has produced unexpected results and has led to more questions, such as the impact loneliness would have at a larger scale, or the effect of creating a set of rules governing interaction beyond adjacency.
Keywords: agent-based, behavior, evolution, loneliness, social
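For illustration, a minimal sketch (not the authors' code) of how one agent update could combine the three boid rules with the PSO-inspired personal-best term, the social-connection term, and the loneliness trigger described in the abstract; all weights, thresholds, and the class structure are assumptions.

```python
import numpy as np

class Agent:
    def __init__(self, pos, rng):
        self.pos = np.asarray(pos, dtype=float)   # current position (2D)
        self.vel = rng.normal(size=2) * 0.1
        self.best = self.pos.copy()               # personal best (PSO-style goal seeking)
        self.connections = []                     # agents met through past interactions
        self.expected_social = 3.0                # expected social involvement
        self.lonely = False

    def step(self, neighbors, w, dt=0.1):
        # Classic boid terms: cohesion, separation, alignment.
        if neighbors:
            centroid = np.mean([n.pos for n in neighbors], axis=0)
            cohesion = centroid - self.pos
            separation = np.sum([self.pos - n.pos for n in neighbors], axis=0)
            alignment = np.mean([n.vel for n in neighbors], axis=0) - self.vel
        else:
            cohesion = separation = alignment = np.zeros(2)
        # PSO-inspired goal seeking: move relative to the personal best position
        # (self.best would be updated elsewhere when richer resources are found).
        goal = self.best - self.pos
        # Social-connection term: move toward previously formed connections.
        social = (np.mean([c.pos for c in self.connections], axis=0) - self.pos
                  if self.connections else np.zeros(2))
        # Loneliness engages when perceived involvement falls short of expectation.
        self.lonely = len(neighbors) < self.expected_social
        social_gain = w["lonely_boost"] if self.lonely else 1.0
        self.vel += dt * (w["coh"] * cohesion + w["sep"] * separation
                          + w["ali"] * alignment + w["goal"] * goal
                          + social_gain * w["soc"] * social)
        self.pos += dt * self.vel
```

In an evolutionary run, the weight dictionary `w` would be the genome under selection, with fitness equal to the total resources collected.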
Procedia PDF Downloads 97
476 Improvement of Activity of β-galactosidase from Kluyveromyces lactis via Immobilization on Polyethylenimine-Chitosan
Authors: Carlos A. C. G. Neto, Natan C. G. e Silva, Thaís de O. Costa, Luciana R. B. Gonçalves, Maria V. P. Rocha
Abstract:
β-galactosidases (E.C. 3.2.1.23) are enzymes that have attracted attention for catalyzing the hydrolysis of lactose and for producing galacto-oligosaccharides by favoring transgalactosylation reactions. These enzymes, when immobilized, can have some enzymatic characteristics substantially improved, and the coating of supports with multifunctional polymers is a promising alternative to enhance the stability of the biocatalysts, among which polyethylenimine (PEI) stands out. PEI has certain properties, such as being a flexible polymer that suits the structure of the enzyme, giving greater stability, especially for multimeric enzymes such as β-galactosidases; besides that, it protects them from environmental variations. The use of a chitosan support coated with PEI could improve the catalytic efficiency of β-galactosidase from Kluyveromyces lactis in the transgalactosylation reaction for the production of prebiotics, such as lactulose, since this strain is more effective in the hydrolysis reaction. In this context, the aim of the present work was first to develop biocatalysts of β-galactosidase from K. lactis immobilized on chitosan coated with PEI, determining the immobilization parameters and their operational and thermal stability, and then to apply them in hydrolysis and transgalactosylation reactions to produce lactulose using whey as a substrate. The immobilization of β-galactosidase on chitosan previously functionalized with 0.8% (v/v) glutaraldehyde and then coated with a 10% (w/v) PEI solution was evaluated using an enzymatic load of 10 mg protein per gram of support. Subsequently, the hydrolysis and transgalactosylation reactions were conducted at 50 °C and 120 RPM for 20 minutes, using whey supplemented with fructose at a ratio of 1:2 lactose/fructose, totaling 200 g/L. Operational stability studies were performed under the same conditions for 10 cycles. Thermal stabilities of the biocatalysts were determined at 50 °C in 50 mM phosphate buffer, pH 6.6, with 0.1 mM MnCl2. The biocatalyst whose support was coated was named CHI_GLU_PEI_GAL, and the one that was not coated was named CHI_GLU_GAL. The coating of the support with PEI considerably improved the immobilization parameters. The immobilization yield increased from 56.53% to 97.45%, the biocatalyst activity from 38.93 U/g to 95.26 U/g, and the efficiency from 3.51% to 6.0% for uncoated and coated support, respectively. The biocatalyst CHI_GLU_PEI_GAL was better than CHI_GLU_GAL in the hydrolysis of lactose and production of lactulose, converting 97.05% of the lactose at 5 min of reaction and producing 7.60 g/L of lactulose in the same time interval. The CHI_GLU_PEI_GAL biocatalyst was stable in the lactose hydrolysis reactions during the 10 cycles evaluated, converting 73.45% of the lactose even after the tenth cycle, and in lactulose production it was stable until the fifth cycle evaluated, producing 10.95 g/L of lactulose. However, the thermal stability of the CHI_GLU_GAL biocatalyst was superior, with a half-life time 6 times higher, probably because the enzyme was immobilized by covalent bonding, which is stronger than adsorption (CHI_GLU_PEI_GAL). Therefore, the strategy of coating the supports with PEI has proven to be effective for the immobilization of β-galactosidase from K. lactis, considerably improving the immobilization parameters as well as the catalytic action of the enzyme. Besides that, this process can be economically viable due to the use of an industrial residue as a substrate.
Keywords: β-galactosidase, immobilization, Kluyveromyces lactis, lactulose, polyethylenimine, transgalactosylation reaction, whey
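For readers unfamiliar with the reported immobilization parameters, a sketch of how they are commonly computed from activity balances; these definitions are the usual ones in the immobilization literature and are assumptions here, since the paper may use slightly different conventions.

```python
# Commonly used immobilization parameters (definitions assumed, not taken
# from this paper): activities in enzyme units (U), support mass in grams.
def immobilization_yield(a_offered, a_supernatant):
    """Percentage of the offered activity that left the supernatant."""
    return 100.0 * (a_offered - a_supernatant) / a_offered

def biocatalyst_activity(a_measured, support_mass_g):
    """Activity expressed per gram of support (U/g)."""
    return a_measured / support_mass_g

def efficiency(a_expressed, a_immobilized):
    """Fraction of the immobilized activity that remains measurable (%)."""
    return 100.0 * a_expressed / a_immobilized
```

Under these conventions, the reported PEI coating raises the yield from 56.53% to 97.45% and the expressed activity from 38.93 U/g to 95.26 U/g of support.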
Procedia PDF Downloads 112
475 Prevalence of Antibiotic Resistant Enterococci in Treated Wastewater Effluent in Durban, South Africa and Characterization of Vancomycin and High-Level Gentamicin-Resistant Strains
Authors: S. H. Gasa, L. Singh, B. Pillay, A. O. Olaniran
Abstract:
Wastewater treatment plants (WWTPs) have been implicated as the leading reservoir for antibiotic resistant bacteria (ARB), including Enterococcus spp., and antibiotic resistance genes (ARGs) worldwide. Enterococci are a group of clinically significant bacteria that have gained much attention as a result of their antibiotic resistance. They play a significant role as the principal cause of nosocomial infections and in the dissemination of antimicrobial resistance genes in the environment. The main objective of this study was to ascertain the role of WWTPs in Durban, South Africa as potential reservoirs for antibiotic resistant Enterococci (ARE) and their related ARGs. Furthermore, the antibiogram and resistance gene profile of Enterococci species recovered from treated wastewater effluent and receiving surface water in Durban were also investigated. Using the membrane filtration technique, Enterococcus selective agar, and selected antibiotics, ARE were enumerated in samples (influent, activated sludge, before chlorination, and final effluent) collected from two WWTPs, as well as from upstream and downstream of the receiving surface water. Two hundred Enterococcus isolates recovered from the treated effluent and receiving surface water were identified by biochemical and PCR-based methods, and their antibiotic resistance profiles were determined by the Kirby-Bauer disc diffusion assay, while PCR-based assays were used to detect the presence of resistance and virulence genes. A high prevalence of ARE was obtained at both WWTPs, with values reaching a maximum of 40%. The influent and activated sludge samples contained the greatest prevalence of ARE, with lower values observed in the before and after chlorination samples. Of the 44 vancomycin and high-level gentamicin-resistant isolates, 11 were identified as E. faecium, 18 as E. faecalis, and 4 as E. hirae, while 11 were classified as "other" Enterococci species. High-level resistance to gentamicin (39%) and resistance to vancomycin (61%) were recorded in the species tested. The most commonly detected virulence gene was gelE (44%), followed by asa1 (40%), while cylA and esp were detected in only 2% of the isolates. The most prevalent aminoglycoside resistance genes were aac(6')-Ie-aph(2''), aph(3')-IIIa, and ant(6')-Ia, detected in 43%, 45% and 41% of the isolates, respectively. A positive correlation was observed between phenotypes resistant to high levels of aminoglycosides and the presence of all aminoglycoside resistance genes. Glycopeptide resistance genes, vanB (37%) and vanC-1 (25%), and macrolide resistance genes, ermB (11%) and ermC (54%), were detected in the isolates. These results show the need for more efficient wastewater treatment and disposal in order to prevent the release of virulent and antibiotic resistant Enterococci species and safeguard public health.
Keywords: antibiogram, enterococci, gentamicin, vancomycin, virulence signatures
Procedia PDF Downloads 220
474 Food Composition Tables Used as an Instrument to Estimate the Nutrient Ingest in Ecuador
Authors: Ortiz M. Rocío, Rocha G. Karina, Domenech A. Gloria
Abstract:
There are several tools to assess the nutritional status of a population. A main instrument commonly used to build those tools is the food composition table (FCT). Despite the importance of FCT, there are many error sources and variability factors that can be present when building those tables and can lead to an under- or overestimation of the nutrient intake of a population. This work identified the different food composition tables used as instruments to estimate nutrient intake in Ecuador. The collection of data for choosing FCT was made through key informants (self-completed questionnaires), supplemented with institutional web research. A questionnaire with general variables (origin, year of edition, etc.) and methodological variables (method of elaboration, information in the table, etc.) was applied to the identified FCT. Those variables were defined based on an extensive literature review. A descriptive analysis of content was performed. Ten printed tables and three databases were reported, all of which were treated indistinctly as food composition tables. We managed to get information from 69% of the references. Several informants referred to printed documents that were not accessible. In addition, searching the internet was not successful. Of the 9 final tables, 8 are from Latin America, and 5 of these were constructed by the indirect method (collection of already published data), having as their main source of information a database from the United States Department of Agriculture (USDA). One FCT was constructed using the direct method (bromatological analysis) and has its origin in Ecuador. All of the tables (100%) made a clear distinction between the food and its method of cooking, 88% of the FCT expressed values of nutrients per 100 g of edible portion, 77% gave precise additional information about the use of the table, and 55% presented all the macro- and micronutrients in a detailed way. The most complete FCT were: INCAP (Central America) and Composition of Foods (Mexico). The most frequently referenced table was the Ecuadorian food composition table of 1965 (70%). The indirect method was used for most tables within this study. However, this method has the disadvantage that it generates less reliable food composition tables, because foods show variations in composition; therefore, a database cannot accurately predict the composition of any isolated sample of a food product. In conclusion, analyzing the pros and cons, and despite it being an FCT elaborated by the indirect method, it is considered appropriate to work with the FCT of INCAP Central America, given the proximity to our country and a food item list that is very similar to ours. Also, it is imperative to have as a reference the table of composition for Ecuadorian food, which, although not updated, was constructed using the direct method with Ecuadorian foods. Hence, both tables will be used to elaborate a questionnaire with the purpose of assessing the food consumption of the Ecuadorian population. In the case of disparate values, we will proceed by taking just the INCAP values, because this is an updated table.
Keywords: Ecuadorian food composition tables, FCT elaborated by direct method, ingest of nutrients of Ecuadorians, Latin America food composition tables
Procedia PDF Downloads 432
473 Degradation of the Cu-DOM Complex by Bacteria: A Way to Increase Phytoextraction of Copper in a Vineyard Soil
Authors: Justine Garraud, Hervé Capiaux, Cécile Le Guern, Pierre Gaudin, Clémentine Lapie, Samuel Chaffron, Erwan Delage, Thierry Lebeau
Abstract:
The repeated use of Bordeaux mixture (copper sulphate) and other chemical forms of copper (Cu) has led to its accumulation in wine-growing soils for more than a century, to the point of modifying the ecosystem of these soils. Phytoextraction of copper could progressively reduce the Cu load in these soils, and even allow copper to be recycled (e.g., as a micronutrient in animal nutrition) by cultivating the extracting plants in the inter-rows of the vineyards. Soil clean-up usually requires several years because the chemical speciation of Cu in solution is mainly based on forms complexed with dissolved organic matter (DOM) that are not phytoavailable, unlike the "free" forms (Cu2+). Indeed, more than 98% of the Cu in solution is bound to DOM. The selection and inoculation, in vineyard soils, of bacteria (bioaugmentation) able to degrade Cu-DOM complexes could increase the phytoavailable pool of Cu2+ in the soil solution (in addition to bacteria which first mobilize Cu in solution from the soil bearing phases) in order to increase phytoextraction performance. In this study, seven Cu-accumulating plants potentially usable in inter-rows were tested for their Cu phytoextraction capacity in hydroponics (ryegrass, brown mustard, buckwheat, hemp, sunflower, oats, and chicory). Also, a bacterial consortium was tested: Pseudomonas sp., previously studied for its ability to mobilize Cu through the pyoverdine siderophore (a complexing agent) and potentially to degrade Cu-DOM complexes, and a second bacterium (to be selected) able to promote the survival of Pseudomonas sp. following its inoculation in soil. An interaction network method was used, based on the notions of co-occurrence and, therefore, of bacterial abundance found in the same soils. Bacteria from the EcoVitiSol project (Alsace, France) were targeted. The final step consisted of coupling the bacterial consortium with the chosen plant in soil pots. The degradation of Cu-DOM complexes is measured on the basis of the absorption index at 254 nm, which gives insight into the aromaticity of the DOM. The "free" Cu in solution (from the mobilization of Cu and/or the degradation of Cu-DOM complexes) is assessed by measuring pCu. Finally, Cu accumulation in plants is measured by ICP-AES. The selection of the plant is currently being finalized. The interaction network method targeted the best positive interactions of Flavobacterium sp. with Pseudomonas sp. These bacteria are both PGPR (plant growth-promoting rhizobacteria), with the ability to improve plant growth and to mobilize Cu from the soil bearing phases (siderophores). Also, these bacteria are known to degrade phenolic groups, which are highly present in DOM. They could therefore contribute to the degradation of Cu-DOM complexes. The results of the upcoming bacteria-plant coupling tests in pots will also be presented.
Keywords: Cu-DOM complexes, bioaugmentation, phytoavailability, phytoextraction
Procedia PDF Downloads 83
472 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin
Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, Jose R. Pérez-Correa
Abstract:
Emamectin Benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal EB doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of the slow absorption of EB in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as spray drying (SD) and ionic gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to those of EB. In addition, alginate (ALG) is a widely used polymer in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%), and loading capacity (LC%). In addition, it is important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestions. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer, and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the best combination of the factors that allowed a lower EB release under gastric conditions, while permitting a major release at intestinal digestion. Two approaches were used to determine this: the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques made it possible to maintain the integrity of EB at acid pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed greater EB release at intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, using these conditions, it is possible to reduce microparticle costs, due to a 60% reduction in EB relative to the optimal EB amount proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying DA, costs can be reduced by 21%, while Y, GD and ID showed values 9.5%, 84.8% and 2.6% lower than the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®
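A sketch of the Derringer-Suich-style desirability approach the abstract refers to; the bounds, weights, and numerical response values below are illustrative assumptions, not the study's data.

```python
import numpy as np

def d_larger_is_better(y, lo, hi, s=1.0):
    """Individual desirability for a response to maximize (0 below lo, 1 above hi)."""
    return float(np.clip(((y - lo) / (hi - lo)) ** s, 0.0, 1.0))

def d_smaller_is_better(y, lo, hi, s=1.0):
    """Individual desirability for a response to minimize."""
    return float(np.clip(((hi - y) / (hi - lo)) ** s, 0.0, 1.0))

# Hypothetical responses: maximize yield (Y%), encapsulation efficiency (EE%)
# and intestinal release (ID%), minimize gastric release (GD%).
responses = {"Y": 72.0, "EE": 85.0, "ID": 60.0, "GD": 8.0}
d = [d_larger_is_better(responses["Y"], 40, 90),
     d_larger_is_better(responses["EE"], 50, 95),
     d_larger_is_better(responses["ID"], 30, 80),
     d_smaller_is_better(responses["GD"], 2, 25)]
D = np.prod(d) ** (1.0 / len(d))   # overall desirability: geometric mean
print(f"overall desirability D = {D:.3f}")
```

In the DA, the factor settings maximizing D are chosen; in MOO, the non-dominated (Pareto) set of factor settings is kept instead, and MCDM picks one compromise solution from it.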
Procedia PDF Downloads 131
471 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As an effect, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as is feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As an effect, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
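A toy sketch of the adjoint gradient method on a 1D diffusion equation instead of the full NSE (the structure carries over): the forward model is an explicit diffusion step, and because that step operator is symmetric, the gradient of J is obtained by transporting the final-time mismatch backwards with the same, stable operator. Grid sizes, the learning rate, and the regularizer weight are illustrative assumptions.

```python
import numpy as np

n, nt, nu, dt, alpha = 128, 200, 0.5, 1e-4, 1e-3
lap = lambda u: np.roll(u, 1) - 2 * u + np.roll(u, -1)   # periodic Laplacian

def forward(u0):
    u = u0.copy()
    for _ in range(nt):
        u = u + dt * nu * lap(u)          # explicit diffusion step, stable forward
    return u

def gradient(u0, v1):
    # Adjoint transport: the step operator (I + dt*nu*L) is symmetric, so its
    # transpose is itself and the backward sweep is as stable as the forward one.
    g = forward(u0) - v1                  # dJ/du1 for J = 0.5 * ||u1 - v1||^2
    for _ in range(nt):
        g = g + dt * nu * lap(g)
    return g + alpha * lap(lap(u0))       # high-order smoothing regularizer term

v1 = forward(np.sin(2 * np.pi * np.arange(n) / n))  # target field at t = 1
u0 = np.zeros(n)                                    # initial guess for t = 0 field
for it in range(500):                               # gradient descent on u0
    u0 -= 0.5 * gradient(u0, v1)
```

In the paper's setting the forward model is the NSE, the backward sweep uses the adjoint NSE, and the segmenting remedy simply applies this loop over short sub-intervals in sequence.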
Procedia PDF Downloads 224
470 Phenomena-Based Approach for Automated Generation of Process Options and Process Models
Authors: Parminder Kaur Heer, Alexei Lapkin
Abstract:
Due to global challenges of increased competition and demand for more sustainable products/processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may be able to attain higher efficiency. However, very few PI options are generally considered. This is because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all the intensification options can be described by their enhancement. The objective of the current work is, thus, the generation of numerous process alternatives based on phenomena, and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. E.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties/drawbacks of the current process or can enhance the effectiveness of the process, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense and, hence, screening is carried out to discard the combinations that are meaningless. For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e., it might perform reaction, or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, which are formed by a list of phenomena for each function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to get the process model. The most promising process options are then chosen subject to a performance criterion, for example purity of product, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical/biochemical process because of its generic nature.
Keywords: phenomena, process intensification, process models, process options
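A sketch of the combinatorial core described above: enumerate phenomena subsets, screen out meaningless ones with feasibility rules (e.g., phase change requires energy transfer), and pass the survivors on as binary vectors for model generation. The phenomena list and the second rule are illustrative assumptions.

```python
from itertools import combinations

phenomena = ["mixing", "reaction", "vapor-liquid_eq", "liquid-liquid_eq",
             "phase_change", "energy_transfer", "solid_catalysis"]

def feasible(combo):
    # Screening rules discard combinations that make no physical sense.
    if "phase_change" in combo and "energy_transfer" not in combo:
        return False   # phase change needs co-present energy transfer
    if "vapor-liquid_eq" in combo and "liquid-liquid_eq" in combo:
        return False   # illustrative mutual-exclusion rule
    return True

options = []
for r in range(1, len(phenomena) + 1):
    for combo in combinations(phenomena, r):
        if feasible(combo):
            # Binary encoding (1 = phenomenon active) for the model generator.
            options.append([int(p in combo) for p in phenomena])

print(len(options), "screened process options")
```

Each surviving binary vector would then be assigned to the function(s) it can execute, and the superstructure is built by combining one option per function.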
Procedia PDF Downloads 234
469 Queer Anti-Urbanism: An Exploration of Queer Space Through Design
Authors: William Creighton, Jan Smitheram
Abstract:
Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In the process, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring's work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection, and the desire to destroy the city towards a mode of queer critique that counters normative ideals of homonormative, metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame, where those who do not fit within this frame are subjected to persecution or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a "design as research" methodology, the design outputs become a vehicle to ask how we might live, otherwise, in architectural space. The design-as-research methodology, a non-linear, iterative process of questioning, designing, and reflecting, establishes itself through three projects, each increasing in scale and complexity. Each of the three scales tackled a different body relationship. The project began by exploring the relations between body and body, body and known others, and body and unknown others. Moving through increasing scales was not to privilege the objective, the public, and the large scale; instead, 'intra-scaling' acts as a tool to rethink how scale reproduces normative ideas of the identity of space. There was a queering of scale. Through this approach, the result was an installation that brings two people together to co-author space, where the installation distorts the sensory experience and forces a more intimate and interconnected experience, challenging our socialized proxemics: knees might touch. To queer the home, the installation was used as a drawing device, a tool to study and challenge spatial perception and drawing convention, and as a way to process practical information about the site and the existing house; the device became a tool to embrace the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through "private" and "public" to support kinship through communal labour, queer relationality, and mooring. The resulting design works to set bodies adrift in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and outcome allows a more inclusive way to consider place, space, and belonging. The projects lend themselves to a queer relationality and interdependence by making spaces that support the unsettled and out-of-place, but is it queer enough?
Keywords: queer, queer anti-urbanism, design as research, design
Procedia PDF Downloads 178
468 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization
Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller
Abstract:
The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After calculating the operation of different energy systems, in terms of the resulting final energy demands, in simulation models in a first stage, the results serve as input for a second-stage MILP optimization, where the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers, but necessitates simplifying the building energy system operation. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization
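A minimal sketch, assuming the PuLP library, of the kind of portfolio-level MILP described: binary variables select modernization measures per building and year, subject to yearly budgets and a portfolio emission-reduction target. The buildings, measures, costs, and emission figures are invented placeholders, and a real formulation would also model system operation.

```python
import pulp

buildings = ["B1", "B2", "B3"]
years = [2025, 2026, 2027]
measures = {"envelope": (60, 8), "heat_pump": (40, 12), "pv": (25, 5)}  # (cost k€, t CO2/a saved)
budget = {2025: 80, 2026: 80, 2027: 80}   # yearly budget (k€)
emission_target = 45                       # required total reduction (t CO2/a)

prob = pulp.LpProblem("modernization_pathway", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(b, m, y) for b in buildings
                                for m in measures for y in years], cat="Binary")

# Objective: total modernization cost over the planning horizon.
prob += pulp.lpSum(measures[m][0] * x[b, m, y]
                   for b in buildings for m in measures for y in years)
# Each measure applied at most once per building.
for b in buildings:
    for m in measures:
        prob += pulp.lpSum(x[b, m, y] for y in years) <= 1
# Yearly budget limits.
for y in years:
    prob += pulp.lpSum(measures[m][0] * x[b, m, y]
                       for b in buildings for m in measures) <= budget[y]
# Portfolio-wide emission reduction target.
prob += pulp.lpSum(measures[m][1] * x[b, m, y]
                   for b in buildings for m in measures for y in years) >= emission_target

prob.solve(pulp.PULP_CBC_CMD(msg=0))
chosen = [k for k, v in x.items() if v.value() == 1]
print(pulp.LpStatus[prob.status], chosen)
```

The two-stage simulation-based alternative would instead precompute final energy demands per (building, measure) combination in a building simulator and feed those as fixed coefficients into a second-stage selection problem of this form.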
Procedia PDF Downloads 36
467 Reactivities of Turkish Lignites during Oxygen Enriched Combustion
Authors: Ozlem Uguz, Ali Demirci, Hanzade Haykiri-Acma, Serdar Yaman
Abstract:
Lignitic coal holds its position as Turkey's most important indigenous energy source for generating energy in thermal power plants. Hence, efficient and environmentally friendly use of lignite in electricity generation is of great importance, and clean coal technologies have been planned to mitigate emissions and provide more efficient burning in power plants. In this context, oxygen enriched combustion (oxy-combustion) is regarded as one of the clean coal technologies, which is based on burning with oxygen concentrations higher than that in air. As most of the Turkish coals are low rank with high mineral matter content, the unburnt carbon trapped in ash is, unfortunately, high, and it leads to significant losses in the overall efficiencies of the thermal plants. Besides, the necessity of burning huge amounts of these low calorific value lignites to get the desired amount of energy also results in the formation of large amounts of ash that is rich in unburnt carbon. Oxygen enriched combustion technology makes it possible to increase the burning efficiency through the complete burning of almost all of the carbon content of the fuel. This also contributes to the protection of air quality, and emission levels drop reasonably. The aim of this study is to investigate the unburnt carbon content and the burning reactivities of several different lignite samples under oxygen enriched conditions. For this reason, the combined effects of temperature and oxygen/nitrogen ratios in the burning atmosphere were investigated and interpreted. To do this, Turkish lignite samples from the Adıyaman-Gölbaşı and Kütahya-Tunçbilek regions were characterized first by proximate and ultimate analyses, and the burning profiles were derived using DTA (Differential Thermal Analysis) curves. Then, these lignites were subjected to a slow burning process in a horizontal tube furnace at different temperatures (200 °C, 400 °C, 600 °C for Adıyaman-Gölbaşı lignite and 200 °C, 450 °C, 800 °C for Kütahya-Tunçbilek lignite) under atmospheres having O₂+N₂ proportions of 21%O₂+79%N₂, 30%O₂+70%N₂, 40%O₂+60%N₂, and 50%O₂+50%N₂. These burning temperatures were specified based on the burning profiles derived from the DTA curves. The residues obtained from these burning tests were also analyzed by proximate and ultimate analyses to detect the unburnt carbon content along with the unused energy potential. The reactivity of these lignites was calculated using several methodologies. The burning yield under the air condition (21%O₂+79%N₂) was used as a benchmark value to compare the effectiveness of the oxygen enriched conditions. It was concluded that the oxygen enriched combustion method enhanced the combustion efficiency and lowered the unburnt carbon content of the ash. Combustion of low-rank coals under oxygen enriched conditions was found to be a promising way to improve the efficiency of lignite-firing energy systems. However, a cost-benefit analysis should be considered for a better justification of this method, since the use of more oxygen brings a non-negligible additional cost.
Keywords: coal, energy, oxygen enriched combustion, reactivity
Procedia PDF Downloads 275
466 Immobilization of β-Galactosidase from Kluyveromyces Lactis on Polyethylenimine-Agarose for Production of Lactulose
Authors: Carlos A. C. G. Neto, Natan C. G. Silva, Thaís O. Costa, Luciana R. B. Gonçalves, Maria V. P. Rocha
Abstract:
β-Galactosidases are enzymes responsible for catalyzing lactose hydrolysis reactions and also for favoring transgalactosylation reactions for the production of prebiotics, among which lactulose stands out. These enzymes, when immobilized, can have some enzymatic characteristics substantially improved, and the coating of supports with multifunctional polymers in immobilization processes is a promising alternative in order to extend the useful life of the biocatalysts, for example, coating with polyethyleneimine (PEI). PEI is a flexible polymer that suits the structure of the enzyme, giving greater stability, especially for multimeric enzymes such as β-galactosidases, and also protects it from environmental variations, for example, in pH and temperature. In addition, it can substantially improve the immobilization parameters and also the efficiency of enzymatic reactions. In this context, the aim of the present work was first to develop biocatalysts of β-galactosidase from Kluyveromyces lactis immobilized on PEI-coated agarose, determining the immobilization parameters and their operational and thermal stability, and then to apply them in the hydrolysis of lactose and synthesis of lactulose, using whey as a substrate. This immobilization strategy was chosen in order to improve the catalytic efficiency of the enzyme in the transgalactosylation reaction for the production of prebiotics, and there are few studies with β-galactosidase from this strain. The immobilization of β-galactosidase on agarose previously functionalized with 48% (w/v) glycidol and then coated with a 10% (w/v) PEI solution was evaluated using an enzymatic load of 10 mg/g of protein. Subsequently, the hydrolysis and transgalactosylation reactions were conducted at 50 °C and 120 RPM for 20 minutes, using whey (66.7 g/L of lactose) supplemented with 133.3 g/L fructose at a ratio of 1:2 (lactose/fructose). Operational stability studies were performed under the same conditions for 10 cycles. Thermal stabilities of the biocatalysts were determined at 50 °C in 50 mM phosphate buffer, pH 6.6, with 0.1 mM MnCl2. The biocatalysts whose supports were coated were named AGA_GLY_PEI_GAL, and those that were not coated were named AGA_GLY_GAL. The coating of the support with PEI considerably improved the immobilization yield (2.6-fold), the biocatalyst activity (1.4-fold), and the efficiency (2.2-fold). The biocatalyst AGA_GLY_PEI_GAL was better than AGA_GLY_GAL in the hydrolysis and transgalactosylation reactions, converting 88.92% of the lactose at 5 min of reaction and obtaining a residual concentration of 5.24 g/L. Besides that, 13.90 g/L of lactulose was produced in the same time interval. The AGA_GLY_PEI_GAL biocatalyst was stable during the 10 cycles evaluated, converting approximately 80% of the lactose and producing 10.95 g/L of lactulose even after the tenth cycle. However, the thermal stability of the AGA_GLY_GAL biocatalyst was superior, with a half-life time 5 times higher, probably because the enzyme was immobilized by covalent bonding, which is stronger than adsorption (AGA_GLY_PEI_GAL). Therefore, the strategy of coating the supports with PEI has proven to be effective for the immobilization of β-galactosidase from K. lactis, considerably improving the immobilization parameters as well as the enzyme-catalyzed reactions. In addition, the use of whey as a raw material for lactulose production has proved to be an industrially advantageous alternative.
Keywords: β-galactosidase, immobilization, lactulose, polyethylenimine, whey
Procedia PDF Downloads 119
465 Potential Impacts of Climate Change on Hydrological Droughts in the Limpopo River Basin
Authors: Nokwethaba Makhanya, Babatunde J. Abiodun, Piotr Wolski
Abstract:
Climate change possibly intensifies hydrological droughts and reduces water availability in river basins. Despite this, most research on climate change effects in southern Africa has focused exclusively on meteorological droughts. This thesis projects the potential impact of climate change on the future characteristics of hydrological droughts in the Limpopo River Basin (LRB). The study uses regional climate model (RCM) projections (from the Coordinated Regional Climate Downscaling Experiment, CORDEX) and a combination of hydrological simulations (using the Soil and Water Assessment Tool Plus model, SWAT+) to predict the impacts at four global warming levels (GWLs: 1.5 °C, 2.0 °C, 2.5 °C, and 3.0 °C) under the RCP8.5 future climate scenario. The SWAT+ model was calibrated and validated with a streamflow dataset observed over the basin, and the sensitivity of the model parameters was investigated. The performance of the SWAT+ LRB model was verified using the Nash-Sutcliffe efficiency (NSE), Percent Bias (PBIAS), Root Mean Square Error (RMSE), and coefficient of determination (R²). The Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Precipitation Index (SPI) have been used to detect meteorological droughts. The Soil Water Index (SSI) has been used to define agricultural drought, while the Water Yield Drought Index (WYLDI), the Surface Run-off Index (SRI), and the Streamflow Index (SFI) have been used to characterise hydrological drought. The performance of the SWAT+ model simulations over the LRB is sensitive to the parameters CN2 (initial SCS runoff curve number for moisture condition II) and ESCO (soil evaporation compensation factor). The best simulation generally performed better during the calibration period than the validation period. In the calibration and validation periods, NSE is ≤ 0.8, while PBIAS is ≥ −80.3%, RMSE ≥ 11.2 m³/s, and R² ≤ 0.9. The simulations project a future increase in temperature and potential evapotranspiration over the basin, but they do not project a significant future trend in precipitation and hydrological variables. However, the spatial distribution of precipitation reveals a projected increase in precipitation in the southern part of the basin and a decline in the northern part of the basin, with the region of reduced precipitation projected to increase with GWLs. A decrease in all hydrological variables is projected over most parts of the basin, especially over the eastern part of the basin. The simulations predict that meteorological droughts (i.e., SPEI and SPI), agricultural droughts (i.e., SSI), and hydrological droughts (i.e., WYLDI and SRI) would become more intense and severe across the basin. SPEI-drought has a greater magnitude of increase than SPI-drought, and agricultural and hydrological droughts have a magnitude of increase between the two. As a result, this research suggests that future hydrological droughts over the LRB could be more severe than the SPI-drought projection predicts but less severe than the SPEI-drought projection. This research can be used to mitigate the effects of potential climate change on hydrological drought in the basin.
Keywords: climate change, CORDEX, drought, hydrological modelling, Limpopo River Basin
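For reference, a sketch of the goodness-of-fit metrics used to verify the SWAT+ calibration, in their commonly used forms (sign conventions for PBIAS vary between references, so treat that one as an assumption); the streamflow values are hypothetical.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; positive means the model underestimates on average."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rmse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def r2(obs, sim):
    """Coefficient of determination as the squared Pearson correlation."""
    return float(np.corrcoef(obs, sim)[0, 1] ** 2)

# Hypothetical monthly streamflow (m3/s), for illustration only.
obs = [12.1, 30.4, 55.2, 40.8, 22.3, 9.7]
sim = [10.9, 28.0, 60.1, 37.5, 24.8, 8.9]
print(nse(obs, sim), pbias(obs, sim), rmse(obs, sim), r2(obs, sim))
```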
Procedia PDF Downloads 129
464 Novel Framework for MIMO-Enhanced Robust Selection of Critical Control Factors in Auto Plastic Injection Moulding Quality Optimization
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
Apparent quality defects such as warpage, shrinkage, weld lines, etc. are an almost unavoidable phenomenon in the mass production of auto plastic appearance parts. These frequently occurring manufacturing defects should be addressed concurrently so as to achieve a final product with acceptable quality standards. Determining the significant control factors that simultaneously affect multiple quality characteristics can significantly improve the optimization results by eliminating the deviating effect of the so-called ineffective outliers. Hence, a robust quantitative approach needs to be developed upon which the major control factors and their levels can be effectively determined, to help improve the reliability of the optimal processing parameter design. The primary objective of the current study was therefore to develop a systematic methodology for the selection of significant control factors (SCF) relevant to multiple quality optimization of an auto plastic appearance part (APAP). An auto bumper was used as a specimen, with quality and production characteristics most similar to the APAP group. A preliminary failure modes and effects analysis (FMEA) was conducted to nominate a database of pseudo-significant control factors prior to the optimization phase. Later, CAE Moldflow simulation analysis was implemented to examine four prevalent plastic injection quality defects of concern to the APAP group, including warpage deflection, volumetric shrinkage, sink mark, and weld line. Furthermore, a step-backward elimination searching method (SESME) has been developed for systematic pre-optimization selection of SCF based on hierarchical orthogonal array design and priority-based one-way analysis of variance (ANOVA). The development of the robust parameter design in the second phase was based on the DOE module of Minitab v.16 statistical software. Based on the one-way ANOVA F-test results (F 0.05; 2, 14), it was concluded that for warpage deflection, the material mixture percentage was the most significant control factor, yielding a 58.34% contribution, while for the other three quality defects, melt temperature was the most significant control factor, with contributions of 25.32%, 84.25%, and 34.57% for sink mark, shrinkage, and weld line strength control, respectively. The results on the least significant control factors revealed injection fill time as the least significant factor for both warpage and sink mark, with respective contributions of 1.69% and 6.12%. On the other hand, for the shrinkage and weld line defects, the least significant control factors were holding pressure and mold temperature, with overall contributions of 0.23% and 4.05%, respectively.
Keywords: plastic injection moulding, quality optimization, FMEA, ANOVA, SESME, APAP
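A sketch of how a percentage contribution of a control factor can be derived from one-way ANOVA sums of squares, as in the priority-based ANOVA step; the factor levels and response values below are invented for illustration.

```python
import numpy as np
from scipy import stats

# Warpage responses grouped by three levels of one control factor
# (e.g., melt temperature); all values are hypothetical.
levels = [np.array([0.82, 0.85, 0.80]),
          np.array([0.71, 0.69, 0.74]),
          np.array([0.62, 0.60, 0.65])]

f_stat, p_value = stats.f_oneway(*levels)        # F-test for significance

all_vals = np.concatenate(levels)
ss_total = np.sum((all_vals - all_vals.mean()) ** 2)
ss_between = sum(len(g) * (g.mean() - all_vals.mean()) ** 2 for g in levels)
contribution = 100.0 * ss_between / ss_total     # percentage contribution

print(f"F = {f_stat:.2f}, p = {p_value:.4f}, contribution = {contribution:.1f}%")
```

Repeating this per factor and per defect, and dropping the factor with the smallest contribution at each pass, gives a step-backward elimination of the kind SESME describes.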
Procedia PDF Downloads 349
463 Continuous and Discontinuous Modeling of Wellbore Instability in Anisotropic Rocks
Authors: C. Deangeli, P. Obentaku Obenebot, O. Omwanghe
Abstract:
The study focuses on the analysis of wellbore instability in rock masses affected by weakness planes. Failure in such rocks can occur in the rock matrix and/or along the weakness planes, in relation to the mud weight gradient. In this case, the simple Kirsch solution coupled with a failure criterion cannot supply a suitable scenario for borehole instabilities. Two different numerical approaches have been used in order to investigate the onset of local failure at the wall of a borehole. For each approach, the influence of the inclination of the weakness planes has been investigated by considering joint sets at 0°, 35°, and 90° to the horizontal. The first set of models was carried out with FLAC 2D (Fast Lagrangian Analysis of Continua), considering the rock material as a continuous medium, with a Mohr-Coulomb criterion for the rock matrix and using the ubiquitous joint model to account for the presence of the weakness planes. In this model, yield may occur in either the solid or along the weak plane, or both, depending on the stress state, the orientation of the weak plane, and the material properties of the solid and weak plane. The second set of models was performed with PFC2D (Particle Flow Code). This code is based on the Discrete Element Method and considers the rock material as an assembly of grains bonded by cement-like materials, and pore spaces. The presence of weakness planes is simulated by the degradation of the bonds between grains along given directions. In general, the results of the two approaches are in agreement. However, the discrete approach seems to capture more complex phenomena related to local failure, in the form of grain detachment at the wall of the borehole. In fact, the presence of weakness planes in the discontinuous medium leads to local instability along the weak planes even in conditions not predicted by the continuous solution. In general, slip failure locations and directions do not follow the conventional wellbore breakout direction but depend upon the internal friction angle and the orientation of the bedding planes. When the weakness planes are at 0° and 90°, the behaviour is similar to that of a continuous rock material, but borehole instability is more severe when the weakness planes are inclined at an angle between 0° and 90° to the horizontal. In conclusion, the results of the numerical simulations show that the prediction of local failure at the wall of the wellbore cannot disregard the presence of weakness planes, and consequently the higher mud weight required for stability for any specific inclination of the joints. Although the discrete approach can simulate only smaller areas, because of the large number of particles required for the generation of the rock material, it seems to investigate more correctly the occurrence of failure at the microscale, and eventually the propagation of the failed zone to a large portion of rock around the wellbore.
Keywords: continuous-discontinuous, numerical modelling, weakness planes, wellbore, FLAC 2D
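For context, a sketch of the Kirsch hoop stress at the wall of a vertical borehole under anisotropic horizontal far-field stresses and mud pressure, which the abstract argues is insufficient once weakness planes are present; the stress magnitudes are illustrative assumptions.

```python
import numpy as np

def kirsch_wall_hoop_stress(sig_H, sig_h, p_w, theta):
    """Hoop stress at the borehole wall (r = a) for a vertical well.

    theta is measured from the direction of the maximum horizontal stress
    sig_H; p_w is the mud (wellbore) pressure. Compression is positive.
    """
    return sig_H + sig_h - 2.0 * (sig_H - sig_h) * np.cos(2.0 * theta) - p_w

theta = np.linspace(0.0, np.pi, 181)
sigma_tt = kirsch_wall_hoop_stress(sig_H=30.0, sig_h=20.0, p_w=12.0, theta=theta)  # MPa
# Conventional breakouts initiate where the hoop stress peaks (90 deg from sig_H).
print(f"max hoop stress {sigma_tt.max():.1f} MPa at "
      f"{np.degrees(theta[sigma_tt.argmax()]):.0f} deg")
```

With weakness planes, slip can occur where this elastic solution predicts no failure, which is why the ubiquitous-joint and particle-based models are needed.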
Procedia PDF Downloads 501
462 The Role of Macroeconomic Condition and Volatility in Credit Risk: An Empirical Analysis of Credit Default Swap Index Spread on Structural Models in U.S. Market during Post-Crisis Period
Authors: Xu Wang
Abstract:
This research builds linear regressions of the U.S. investment-grade and high-yield Credit Default Swap index (CDX) spreads on macroeconomic condition and volatility measures, using monthly data from March 2009 to July 2016, to study the relationship between different dimensions of the macroeconomy and overall credit quality. The most significant contribution of this research is systematically examining the individual and joint effects of macroeconomic condition and volatility on CDX spreads by including macroeconomic time series that capture different dimensions of the U.S. economy. Industrial production index growth, non-farm payroll growth, consumer price index growth, the 3-month Treasury rate, and consumer sentiment are introduced to capture the condition of real economic activity, employment, inflation, monetary policy, and risk aversion, respectively. The conditional variance of each macroeconomic series is constructed using an ARMA-GARCH model and is used to measure macroeconomic volatility. A linear regression model is used to capture the relationships between monthly average CDX spreads and the macroeconomic variables. The Newey-West estimator is used to control for autocorrelation and heteroskedasticity in the error terms. Furthermore, a sensitivity factor analysis and a standardized coefficients analysis are conducted to compare the sensitivity of CDX spreads to the different macroeconomic variables and to compare the relative effects of macroeconomic condition versus macroeconomic uncertainty, respectively. This research shows that macroeconomic condition has a negative effect on the CDX spread, while macroeconomic volatility has a positive effect on determining the CDX spread. Macroeconomic condition and volatility variables can jointly explain more than 70% of the whole variation of the CDX spread. In addition, the sensitivity factor analysis shows that the CDX spread is most sensitive to the Consumer Sentiment index. Finally, the standardized coefficients analysis shows that both macroeconomic condition and volatility variables are important in determining the CDX spread, but the macroeconomic condition category of variables has more relative importance than the macroeconomic volatility category. This research shows that the CDX spread can reflect the individual and joint effects of macroeconomic condition and volatility, which suggests that individual investors or the government should carefully regard the CDX spread as a measure of overall credit risk, because the CDX spread is influenced by the macroeconomy. In addition, the significance of macroeconomic condition and volatility variables, such as the non-farm payroll growth rate and industrial production index growth volatility, suggests that the government should pay more attention to overall credit quality in the market when the macroeconomy is weak or volatile.
Keywords: autoregressive moving average model, credit spread puzzle, credit default swap spread, generalized autoregressive conditional heteroskedasticity model, macroeconomic conditions, macroeconomic uncertainty
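A sketch of the two estimation steps described, assuming the `arch` and `statsmodels` packages: fit an AR-GARCH(1,1) to a macroeconomic series to extract its conditional volatility, then regress the CDX spread on condition and volatility measures with Newey-West (HAC) standard errors. The series here are randomly generated placeholders, and the lag choices are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from arch import arch_model

rng = np.random.default_rng(0)
n = 89  # monthly observations, March 2009 - July 2016
df = pd.DataFrame({
    "ip_growth": rng.normal(0.2, 0.6, n),        # industrial production growth
    "payroll_growth": rng.normal(0.15, 0.3, n),  # non-farm payroll growth
    "cdx_ig": rng.normal(90, 15, n),             # investment-grade CDX spread (bp)
})

# Step 1: conditional volatility of a macro series via AR(1)-GARCH(1,1).
garch = arch_model(df["ip_growth"], mean="AR", lags=1,
                   vol="GARCH", p=1, q=1).fit(disp="off")
df["ip_vol"] = garch.conditional_volatility

# Step 2: regress the CDX spread on condition and volatility measures, with
# Newey-West (HAC) errors for autocorrelation and heteroskedasticity.
X = sm.add_constant(df[["ip_growth", "payroll_growth", "ip_vol"]])
ols = sm.OLS(df["cdx_ig"], X, missing="drop").fit(
    cov_type="HAC", cov_kwds={"maxlags": 3})
print(ols.summary().tables[1])
```

Standardized coefficients, used in the paper to compare condition against volatility variables, can be obtained by z-scoring each column of `X` (except the constant) and `y` before fitting.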
Procedia PDF Downloads 167
461 An Integrated Theoretical Framework on Mobile-Assisted Language Learning: User's Acceptance Behavior
Authors: Gyoomi Kim, Jiyoung Bae
Abstract:
In the field of language education research, there are few attempts to empirically examine learners' acceptance behavior toward mobile-assisted language learning (MALL) and its related factors. This study is one of the few to propose an integrated theoretical framework that explains MALL users' acceptance behavior and its potential factors. Constructs from the technology acceptance model (TAM) and MALL research are tested in the integrated framework. Based on previous studies, a hypothetical model was developed. Four external variables related to the MALL user's acceptance behavior were selected: subjective norm, content reliability, interactivity, and self-regulation. The model was also composed of four other constructs: two latent variables, perceived ease of use and perceived usefulness, were considered cognitive constructs; attitude toward MALL, an affective construct; and behavioral intention to use MALL, a behavioral construct. The participants were 438 undergraduate students enrolled in an intensive English program at one university in Korea. This particular program was held in January 2018, during the vacation period. The students were given eight hours of English classes each day from Monday to Friday for four weeks and were asked to complete MALL courses for practice outside the classroom. Therefore, all participants experienced a blended MALL environment. The instrument was a self-response questionnaire, and each construct was measured by five questions. Once the questionnaire was developed, it was distributed to the participants at the final ceremony of the intensive program in order to collect data from a large number of participants at a time. The data showed significant evidence supporting the hypothetical model. The results, confirmed through structural equation modeling analysis, are as follows: First, the four external variables, subjective norm, content reliability, interactivity, and self-regulation, significantly affected perceived ease of use. Second, subjective norm, content reliability, self-regulation, and perceived ease of use significantly affected perceived usefulness. Third, perceived usefulness and perceived ease of use significantly affected attitude toward MALL. Fourth, attitude toward MALL and perceived usefulness significantly affected behavioral intention to use MALL. These results imply that the integrated framework from TAM and MALL could be useful when introducing a MALL environment to university students or adult English learners. The key constructs, except interactivity, showed significant relationships with one another and had direct and indirect impacts on MALL users' acceptance behavior. Therefore, the constructs and validated metrics are valuable for language researchers and educators who are interested in MALL.
Keywords: blended MALL, learner factors/variables, mobile-assisted language learning, MALL, technology acceptance model, TAM, theoretical framework
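A sketch of how the hypothesized structural model could be specified and fitted, assuming the semopy package and its lavaan-style syntax; the construct abbreviations (SN, CR, INT, SR, PEOU, PU, ATT, BI), the file name, and the use of composite scores rather than full latent measurement models are all assumptions for illustration.

```python
import pandas as pd
from semopy import Model

# Structural paths mirroring the reported results: external variables ->
# perceived ease of use / usefulness -> attitude -> behavioral intention.
desc = """
PEOU ~ SN + CR + INT + SR
PU   ~ SN + CR + SR + PEOU
ATT  ~ PEOU + PU
BI   ~ ATT + PU
"""

data = pd.read_csv("mall_questionnaire.csv")  # hypothetical file of construct scores
model = Model(desc)
model.fit(data)
print(model.inspect())   # path coefficients, standard errors, p-values
```

A full replication would instead define each construct as a latent variable measured by its five questionnaire items (lavaan's `=~` operator), since each construct here was measured by five questions.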
Procedia PDF Downloads 240
460 Investigating Educator Perceptions of Body-Rich Language on Student Self-Image, Body-Consciousness and School Climate
Authors: Evelyn Bilias-Lolis, Emily Louise Winter
Abstract:
Schools have a responsibility to implement school-wide frameworks that actively provide prevention, detection, and support across all aspects of child development and learning. Such efforts can range from individual or classroom-level supports to school-wide primary prevention practices for the school’s infrastructure or climate. This study assessed the perceptions of educators across a variety of disciplines in Connecticut (i.e., elementary and secondary education, special education, school psychology, and school social work) regarding the impact of their beliefs, language, and behavior about food and body consciousness on student self-image and school climate. Participants (N=50) completed a short electronic questionnaire measuring perceptions of how their behavior can influence their students’ opinions about themselves, their emerging self-image, and the overall climate of the school community. Secondly, the beliefs that were directly assessed in the first portion of the survey were further measured through applied social vignettes involving students directly or as bystanders. Preliminary findings are intriguing. When asked directly, 100% of the respondents reported that what they say to students could influence students’ opinions about themselves, and 98% of participants further agreed that their behavior both toward and in front of students could impact a student’s developing self-image. Likewise, 82% of the sample agreed that their personal language and behavior affect the overall climate of a school building. However, when the same beliefs were assessed via applied social vignettes depicting routine social exchanges, the results were considerably more dispersed (i.e., responses were spread evenly across levels of agreement and disagreement among participants in all areas). These preliminary findings offer humble but critical implications for informing integrated school wellness frameworks that aim to create body-sensitive school communities. Research indicates that perceptions about body image, attitudes about eating, and the onset of disordered eating practices surface in the school-aged years. Schools provide a natural setting for instilling foundations of child wellness as a natural extension of existing school climate reform efforts. These measures do not always need to be expansive or extreme; rather, educators have a ripe opportunity to become champions for health and wellness through increased self-awareness and subtle shifts in language and behavior. Future psychological research needs to continue to explore this line of inquiry using larger and more varied samples of educators in order to identify needs in teacher training and development that can yield positive and preventive health outcomes for children.
Keywords: body-sensitive schools, integrated school health, school climate reform, teacher awareness
Procedia PDF Downloads 158
459 Electrohydrodynamic Patterning for Surface Enhanced Raman Scattering for Point-of-Care Diagnostics
Authors: J. J. Rickard, A. Belli, P. Goldberg Oppenheimer
Abstract:
Medical diagnostics, environmental monitoring, homeland security and forensics increasingly demand specific and field-deployable analytical technologies for quick point-of-care diagnostics. Although technological advancements have made optical methods well-suited for miniaturization, a highly sensitive detection technique for minute sample volumes is required. Raman spectroscopy is a well-known analytical tool, but its signals are very weak, making it unsuitable for trace-level analysis. Enhancement via localized optical fields (surface plasmon resonances) on nanoscale metallic materials generates huge signals in surface-enhanced Raman scattering (SERS), enabling single-molecule detection. This enhancement can be tuned by manipulating the surface roughness and architecture at the sub-micron level. Nevertheless, the development and application of SERS have been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplexable and addressable SERS substrates with high enhancements is of profound interest for SERS-based sensing devices. While most SERS substrates are manufactured by conventional lithographic methods, the development of a cost-effective approach to creating nanostructured surfaces is a much sought-after goal in the SERS community. Here, a method is established to create controlled, self-organized, hierarchical nanostructures using hierarchical electrohydrodynamic (HEHD) instabilities. The created structures are readily fine-tuned, which is an important requirement for optimizing SERS to obtain the highest enhancements. HEHD pattern formation enables the fabrication of multiscale 3D structured arrays as SERS-active platforms. Importantly, each of the HEHD-patterned individual structural units yields a considerable SERS enhancement, enabling each single unit to function as an isolated sensor. Each of the formed structures can be effectively tuned and tailored to provide a high SERS enhancement while arising from different HEHD morphologies. The HEHD fabrication of sub-micrometer architectures is straightforward and robust, providing an elegant route for high-throughput biological and chemical sensing. The superior detection properties and the ability to fabricate SERS substrates at the miniaturized scale will facilitate the development of advanced and novel opto-fluidic devices, such as portable detection systems, and will offer numerous applications in biomedical diagnostics, forensics, ecological warfare and homeland security.
Keywords: hierarchical electrohydrodynamic patterning, medical diagnostics, point-of-care devices, SERS
Procedia PDF Downloads 347
458 Microalgae Technology for Nutraceuticals
Authors: Weixing Tan
Abstract:
Production of nutraceuticals from microalgae—a virtually untapped natural phyto-based source comprising an estimated 200,000 to 1,000,000 species—offers a sustainable and healthy alternative to conventionally sourced nutraceuticals for the market. Microalgae can be grown organically using only natural sunlight, water and nutrients at an extremely fast rate, e.g., 10-100 times more efficiently than crops or trees. However, the commercial success of microalgae products at scale remains limited, largely due to the lack of economically viable technologies. There are two major microalgae production systems or technologies currently available: 1) the open system, as represented by open pond technology, and 2) the closed system, such as photobioreactors (PBR). Each carries its own unique features and challenges. Although an open system requires a lower initial capital investment than a PBR, it has many unavoidable drawbacks; for example, much lower productivity, difficulty in contamination control/cleaning, inconsistent product quality, inconvenience in automation, restriction in location selection, and unsuitability for cold areas – all directly linked to the system's openness and flat underground design. On the other hand, a PBR system has characteristics almost entirely opposite to the open system, such as higher initial capital investment, better productivity, better contamination and environmental control, wider suitability in different climates, ease of automation, higher and more consistent product quality, higher energy demand (particularly if using artificial lights), and variable operational expenses if not automated. Although closed systems like PBRs are not yet highly competitive in the current nutraceutical supply market, technological advances can be made, in particular for PBR technology, to narrow the gap significantly. One example is the readily scalable P2P Microalgae PBR Technology at Grande Prairie Regional College, Canada, developed over 11 years with return on investment (ROI) considered for key production processes. The P2P PBR system is approaching economic viability at a pre-commercial stage due to five ROI-integrated major components: (1) optimum use of free sunlight through attenuation (patented); (2) simple, economical, and chemical-free harvesting (patent ready to file); (3) an optimum pH- and nutrient-balanced culture medium (published); (4) a reliable water and nutrient recycling system (trade secret); and (5) a low-cost automated system design (trade secret). These innovations have allowed the P2P Microalgae Technology to increase daily yield to 106 g/m2/day of Chlorella vulgaris, which contains 50% protein and 2-3% omega-3. Based on current market prices and scale-up factors, the P2P PBR system is a promising microalgae technology for a market-competitive nutraceutical supply.
Keywords: microalgae technology, nutraceuticals, open pond, photobioreactor PBR, return on investment ROI, technological advances
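For a rough sense of scale, the reported areal yield can be converted to annual per-hectare output. The arithmetic below is our illustration, not a figure from the abstract, and it assumes year-round operation at the reported rate, which is optimistic.

```python
# Back-of-the-envelope scale-up of the reported yield (illustrative only;
# assumes year-round operation at the reported 106 g/m2/day rate).
daily_yield_g_m2 = 106          # reported Chlorella vulgaris productivity
protein_fraction = 0.50         # reported protein content
m2_per_hectare = 10_000

annual_biomass_t_ha = daily_yield_g_m2 * 365 * m2_per_hectare / 1e6  # tonnes
annual_protein_t_ha = annual_biomass_t_ha * protein_fraction
print(f"~{annual_biomass_t_ha:.0f} t biomass/ha/yr, "
      f"~{annual_protein_t_ha:.0f} t protein/ha/yr")
# ~387 t biomass/ha/yr and ~193 t protein/ha/yr, consistent with the claim
# of productivity 10-100x higher than conventional crops.
```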
Procedia PDF Downloads 157
457 Phytochemical Composition, Antimicrobial Potential and Antioxidant Activity of Peganum harmala L. Extracts
Authors: Narayana Bhat, Majda Khalil, Hamad Al-Mansour, Anitha Manuvel, Vimla Yeddu
Abstract:
The aim of this study was to assess the antimicrobial and antioxidant potential and the phytochemical composition of Peganum harmala L. For this purpose, powdered shoot, root, and seed samples were extracted in an accelerated solvent extractor (ASE) with methanol, ethanol, acetone, and dichloromethane. The residues were reconstituted in the above solvents and 10% dimethyl sulphoxide (DMSO). The antimicrobial activity of these extracts was tested against two bacterial (Escherichia coli E49 and Staphylococcus aureus CCUG 43507) and two fungal (Candida albicans ATCC 24433, Candida glabrata ATCC 15545) strains using the well-diffusion method. The minimum inhibitory concentration (MIC) and growth pattern of these test strains were determined using the microbroth dilution method, and a phospholipase assay was performed to detect tissue damage in the host cells. Results revealed that the ethanolic, methanolic, and dichloromethane extracts of seeds exhibited significant antimicrobial activities against all tested strains, whereas the acetone extract of seeds was effective against E. coli only. Similarly, the ethanolic and methanolic extracts of roots were effective against the two bacterial strains only. A concentration of 0.6% of the methanol extract of seeds was found to be the MIC for Escherichia coli E49, Staphylococcus aureus CCUG 43507, and Candida glabrata ATCC 15545. Overall, seed extracts had greater antimicrobial activities than root and shoot extracts. The original plant extract and the MIC dilutions prevented phospholipase secretion in Staphylococcus aureus CCUG 43507 and Candida albicans ATCC 24433. The 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging assay revealed radical scavenging activities ranging from 71.80 ± 4.36% to 87.75 ± 1.70%. The main compound present in the root extract was 1-methyl-7-methoxy-beta-carboline (RT: 44.171), followed by norlapachol (3.62%), benzopyrazine (2.20%), palmitic acid (2.12%) and vasicinone (1.96%). In contrast, phenol,4-ethenyl-2-methoxy was abundant in the methanolic extract of the shoot, whereas 1-methyl-7-methoxy-beta-carboline (79.59%), linoleic acid (9.05%), delta-tocopherol (5.02%), 9,12-octadecadienoic acid, methyl ester (2.65%), benzene, 1,1-1,2 ethanediyl bis 3,4dimethyl (1.15%), anthraquinone (0.58%), hexadecanoic acid, methyl ester (0.54%), palmitic acid (0.35%) and methyl stearate (0.18%) were present in the methanol extract of seeds. The major findings of this study, along with their relevance to developing effective, safe drugs, will be discussed in this presentation.
Keywords: medicinal plants, secondary metabolites, phytochemical screening, bioprospecting, radical scavenging
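The abstract does not state how the scavenging percentages were computed; the standard DPPH convention, which we assume here, is

\[
\text{Scavenging activity } (\%) = \frac{A_{\text{control}} - A_{\text{sample}}}{A_{\text{control}}} \times 100,
\]

where \(A_{\text{control}}\) is the absorbance of the DPPH solution without extract and \(A_{\text{sample}}\) the absorbance with extract added, typically read at 517 nm.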
Procedia PDF Downloads 178
456 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness
Abstract:
Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson’s ratio behavior and tunable properties. Conventional auxetic lattice structures, in which the deformation process is governed by a bending-dominated mechanism, face the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports a finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structural design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame; the stretching-dominated deformation mechanism was determined according to Maxwell’s stability criterion. Finite element (FE) models of the 2D lattice structures, assigned stainless steel material properties, were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulation, a parametric analysis was performed to investigate the effect of the structural parameters on Poisson’s ratio and mechanical properties. Geometrical optimization was then implemented to achieve the optimal Poisson’s ratio for maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a corresponding 3D geometric configuration using an orthogonal splicing method. The numerical results for the 2D and 3D structures under quasi-static compressive loading were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios, and specific energy absorption. As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson’s ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical predictions in this study suggest that the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a concurrent requirement for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.
Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb
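Maxwell's stability criterion, invoked above to establish the stretching-dominated mechanism, is not spelled out in the abstract; its standard form for pin-jointed frames is

\[
M_{2D} = b - 2j + 3, \qquad M_{3D} = b - 3j + 6,
\]

where \(b\) is the number of struts and \(j\) the number of frictionless joints. \(M < 0\) indicates a mechanism that, when built with rigid joints, deforms by strut bending, whereas \(M \ge 0\) is a necessary condition for a stretching-dominated lattice.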
Procedia PDF Downloads 88
455 Acrylamide Concentration in Cakes with Different Caloric Sweeteners
Authors: L. García, N. Cobas, M. López
Abstract:
Acrylamide, a probable carcinogen, is formed in foods processed at high temperature (>120 ºC) when the free amino acid asparagine reacts with reducing sugars, mainly glucose and fructose. The repeated heating of cane juices would potentially form acrylamide during brown sugar production. This study aims to determine whether using panela in yogurt cake preparation increases acrylamide formation. A secondary aim is to analyze the acrylamide concentration in four cake confections with different caloric sweetener ingredients: beet sugar (BS), cane sugar (CS), panela (P), and a panela and chocolate mix (PC). The doughs were obtained by combining the ingredients in a planetary mixer. A model system made up of flour (25%), caloric sweetener (25%), eggs (23%), yogurt (15.7%), sunflower oil (9.4%), and brewer's yeast (2%) was applied to the BS, CS and P cakes. The ingredients of the PC cakes varied: flour (21.5%), panela chocolate (21.5%), eggs (25.9%), yogurt (18%), sunflower oil (10.8%), and brewer’s yeast (2.3%). The preparations were baked for 45' at 180 ºC. Moisture was estimated by the AOAC method. Protein was determined by the Kjeldahl method. Ash percentage was calculated by weight loss after pyrolysis (≈600 °C). Fat content was measured using liquid-solid extraction in the hydrolyzed raw ingredients and final confections. Carbohydrates were determined by difference, and total sugars by the Luff-Schoorl method, based on the iodometric determination of copper ions. Finally, acrylamide content was determined by LC-MS with an isocratic system (phase A: 97.5% water with 0.1% formic acid; phase B: 2.5% methanol), using an internal standard procedure. Statistical analysis was performed using SPSS v.23. One-way analysis of variance determined differences in acrylamide content and compositional variables, with the caloric sweetener as the fixed effect. Significance levels were determined by applying Duncan's t-test (p<0.05). P cakes showed a lower energy value than the other baked products; their sugar content was similar to that of the BS and CS cakes, with 6.1% mean crude protein. Acrylamide content in the caloric sweeteners was similar to previously reported values; however, P and PC showed significantly higher concentrations, probably explained by the production procedure applied. Acrylamide formation depends on both the concentration and availability of reducing sugars and asparagine. Beet sugar samples did not present acrylamide concentrations above the detection and quantification limits; however, the highest acrylamide content was measured in the BS cakes. This may be due to the higher concentration of reducing sugars and asparagine in the other raw ingredients. The cakes made with panela, cane sugar, or panela with chocolate did not differ in acrylamide content. The lack of asparagine measurements constitutes a limitation. Cakes made with panela showed lower acrylamide formation than products elaborated with beet or cane sugar.
Keywords: beet sugar, cane sugar, panela, yogurt cake
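The ANOVA-plus-post-hoc workflow described above (run in SPSS by the authors) can be sketched in Python as follows. This is our illustration, not the authors' code: Duncan's test has no direct SciPy/statsmodels equivalent, so Tukey's HSD is shown as a stand-in post hoc comparison, and the data file and column names are hypothetical.

```python
# Illustrative sketch (not the authors' SPSS workflow): one-way ANOVA of
# acrylamide concentration with sweetener type as the fixed effect, then a
# post hoc comparison (Tukey's HSD as a stand-in for Duncan's test).
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("acrylamide_cakes.csv")  # hypothetical columns: sweetener, acrylamide
groups = [g["acrylamide"].values for _, g in df.groupby("sweetener")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")

# Pairwise comparisons among BS, CS, P and PC at alpha = 0.05
print(pairwise_tukeyhsd(df["acrylamide"], df["sweetener"], alpha=0.05))
```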
Procedia PDF Downloads 66
454 Development of International Entry-Level Nursing Competencies to Address the Continuum of Substance Use
Authors: Cheyenne Johnson, Samantha Robinson, Christina Chant, Ann M. Mitchell, Carol Price, Carmel Clancy, Adam Searby, Deborah S. Finnell
Abstract:
Introduction: Substance use along the continuum from at-risk use to a substance use disorder (SUD) contributes substantially to the burden of disease and related harms worldwide. There is a growing body of literature highlighting the lack of substance use related content in nursing curricula, as well as a lack of consensus on the key competencies necessary for entry-level nurses. Globally, there is a lack of established nursing competencies related to prevention, health promotion, harm reduction, and treatment of at-risk substance use and SUDs. At a critical time in public health, this gap in nursing curricula contributes to a lack of preparation among entry-level nurses to support people along the continuum of substance use; thus, in practice, early opportunities for screening, support, and intervention may be missed. To address this gap, an international committee was convened to develop international entry-level nursing competencies specifying the knowledge, skills, and abilities that all nurses should possess in order to address the continuum of substance use. Methodology: An international steering committee, including representation from Canada, the United States, the United Kingdom, and Australia, was established to lead this work over a one-year period. The steering committee conducted a scoping review to examine nursing competency frameworks and to inform a competency structure that would guide this work. The next steps were to outline key competency areas and establish leaders for the working groups developing the competencies. In addition, a larger international committee was gathered to contribute to the competency working groups, review the collective work, and concur on the final document. Findings: A comprehensive framework was developed, with competencies covering a wide spectrum of substance use across the lifespan and in the context of prevention, health promotion, harm reduction, and treatment, including special populations. The development of this competency-based framework meets an identified need to provide guidance for universities, health authorities, policy makers, nursing regulators, and other organizations that provide and support nursing education focused on care for patients and families with at-risk substance use and SUDs. Conclusion: Utilizing these global competencies as expected outcomes of educational and skill-building curricula for entry-level nurses holds great promise for incorporating evidence-informed training in the care and management of people across the continuum of substance use.
Keywords: addiction nursing, addiction nursing curriculum, competencies, substance use
Procedia PDF Downloads 178
453 Determinants of Quality of Life in Patients with Atypical Prarkinsonian Syndromes: 1-Year Follow-Up Study
Authors: Tatjana Pekmezovic, Milica Jecmenica-Lukic, Igor Petrovic, Vladimir Kostic
Abstract:
Background: The group of atypical parkinsonian syndromes (APS) includes a variety of rare neurodegenerative disorders characterized by reduced life expectancy, increasing disability, and a considerable impact on health-related quality of life (HRQoL). Aim: In this study we wanted to answer two questions: a) which demographic and clinical factors are the main contributors to HRQoL in our cohort of patients with APS, and b) how does the quality of life of these patients change over a 1-year follow-up period. Patients and Methods: We conducted a prospective cohort study in a hospital setting. The initial study comprised all consecutive patients who were referred to the Department of Movement Disorders, Clinic of Neurology, Clinical Centre of Serbia, Faculty of Medicine, University of Belgrade (Serbia), from January 31, 2000 to July 31, 2013, with initial diagnoses of ‘Parkinson’s disease’, ‘parkinsonism’, ‘atypical parkinsonism’ or ‘parkinsonism plus’ made during the first 8 months from the appearance of the first symptom(s). The patients were afterwards followed regularly at 4-6 month intervals, and diagnoses were eventually established for 46 patients fulfilling the criteria for clinically probable progressive supranuclear palsy (PSP) and 36 patients for probable multiple system atrophy (MSA). Health-related quality of life was assessed using the SF-36 questionnaire (Serbian translation). Hierarchical multiple regression analysis was conducted to identify predictors of the SF-36 composite scores. The importance of changes in quality of life scores of patients with APS between baseline and the follow-up time-point was quantified using the Wilcoxon signed-rank test. The magnitude of any change in quality of life was calculated as an effect size (ES). Results: The final models of the hierarchical regression analysis showed that apathy, measured by the Apathy Evaluation Scale (AES) score, accounted for 59% of the variance in the Physical Health Composite Score of the SF-36 and 14% of the variance in the Mental Health Composite Score (p<0.01). Changes in HRQoL were assessed in 52 patients with APS who completed the 1-year follow-up period. The analysis of the magnitude of changes in HRQoL during the one-year follow-up period showed sustained medium ES (0.50-0.79) for both the Physical and Mental Health composite scores and total quality of life, as well as for Physical Health, Vitality, Role Emotional and Social Functioning. Conclusion: This study provides insight into new potential predictors of HRQoL and its changes over time in patients with APS. Additionally, the identification of both prognostic markers of poor HRQoL and the magnitude of its changes should be considered when developing comprehensive treatment-related strategies and health care programs aimed at improving HRQoL and well-being in patients with APS.
Keywords: atypical parkinsonian syndromes, follow-up study, quality of life, APS
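The abstract does not give the effect size formula; one common convention for the Wilcoxon signed-rank test, assumed in the sketch below, derives a z-value from the normal approximation of the signed-rank statistic and reports r = z/√n. The data here are simulated placeholders, not the study's scores.

```python
# Illustrative sketch (not the authors' analysis): Wilcoxon signed-rank test
# on paired baseline/follow-up SF-36 composite scores, with an effect size
# from the normal approximation, r = z / sqrt(n). Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 52                                    # patients completing follow-up
baseline = rng.normal(45, 10, n)          # hypothetical composite scores
follow_up = baseline - rng.normal(5, 6, n)  # hypothetical 1-year decline

w, p = stats.wilcoxon(baseline, follow_up)
mu = n * (n + 1) / 4                      # mean of W under H0
sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
z = (w - mu) / sigma
r = abs(z) / np.sqrt(n)
print(f"W = {w:.0f}, p = {p:.4f}, effect size r = {r:.2f}")
```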
Procedia PDF Downloads 307
452 Magnetic SF (Silk Fibroin) E-Gel Scaffolds Containing bFGF-Conjugated Fe3O4 Nanoparticles
Authors: Z. Karahaliloğlu, E. Yalçın, M. Demirbilek, E.B. Denkbaş
Abstract:
Critical-sized bone defects caused by trauma, bone diseases, prosthetic implant revision or tumor excision cannot be repaired by physiological regenerative processes. Current orthopedic approaches for critical-sized bone defects use autologous bone grafts, bone allografts, or synthetic graft materials. However, these strategies cannot completely solve the problem, which motivates the development of novel, effective biological scaffolds for tissue engineering and regenerative medicine applications. In particular, scaffolds combined with a variety of bio-agents have emerged as fundamental tools for the regeneration of damaged bone tissues due to their ability to promote cell growth and function. In this study, a magnetic silk fibroin (SF) hydrogel scaffold was prepared by an electrogelation process of a concentrated Bombyx mori silk fibroin (8% wt) aqueous solution. For the enhancement of osteoblast-like cell (SaOS-2) growth and adhesion, basic fibroblast growth factor (bFGF) was conjugated physically to HSA (human serum albumin)-coated magnetic nanoparticles (Fe3O4), and magnetic SF e-gel scaffolds were prepared by incorporation of Fe3O4, HSA=Fe3O4 and HSA=Fe3O4-bFGF nanoparticles. HSA=Fe3O4, HSA=Fe3O4-bFGF loaded and bare SF e-gel scaffolds were characterized using scanning electron microscopy (SEM). For the cell studies, the human osteoblast-like cell line (SaOS-2) was used, and an MTT assay was employed to assess the cytotoxicity of the magnetic silk fibroin e-gel scaffolds and the cell density on these surfaces. For the evaluation of osteogenic activity, ALP (alkaline phosphatase), the amount of mineralized calcium, total protein, and collagen were studied. Fe3O4 nanoparticles were successfully synthesized, and bFGF was conjugated to the HSA=Fe3O4 nanoparticles with a binding yield of 97.5% and a particle size of 71.52±2.3 nm. Electron microscopy images of the prepared HSA- and bFGF-incorporated SF e-gel scaffolds showed a 3D porous morphology. In terms of water uptake, the bFGF-conjugated HSA=Fe3O4 nanoparticles showed the best water absorbability among all groups. In the in-vitro cell culture studies performed using the SaOS-2 cell line, coating the Fe3O4 nanoparticle surface with a protein enhanced cell viability, and both the HSA coating and the bFGF conjugation had an inductive effect on cell proliferation. Regarding ALP, a marker of bone formation and osteoblast differentiation, the ALP activity and total protein results showed that HSA=Fe3O4-bFGF loaded SF e-gels had significantly enhanced ALP activity. Osteoblasts cultured on HSA=Fe3O4-bFGF loaded SF e-gels deposited more calcium than those on plain SF e-gels. The proposed magnetic scaffolds seem promising for bone tissue regeneration and will be used in future work for various applications.
Keywords: basic fibroblast growth factor (bFGF), e-gel, iron oxide nanoparticles, silk fibroin
Procedia PDF Downloads 290
451 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4
Authors: Ryan A. Black, Stacey A. McCaffrey
Abstract:
Over the past few decades, great strides have been made toward improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ them and decide which models are most appropriate for their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function that links the expected response and the predictor, the response option format, and dimensionality. As a result, inferior methods (i.e., Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models, that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover models from the most basic binary IRT model, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and complex 4-parameter logistic (4-PL) model. The binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: 1. simulated data of N=500,000 subjects who responded to four dichotomous items, and 2. a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to the emotional consequences of alcohol use. The real-world data were based on responses to items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). The IRT analyses conducted on both the simulated data and the real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and analyses will be available upon request to allow for replication of the results.
Keywords: instrument development, item response theory, latent trait theory, psychometrics
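As a companion to the SAS demonstration, the data-generating side of such a study can be sketched in Python. The snippet below is our illustration rather than the authors' SAS code: it simulates binary responses under the 4-PL model, which nests the simpler models (setting c = 0 and d = 1 recovers the 2-PL; additionally fixing all discriminations equal gives the 1-PL). The item parameter values are hypothetical.

```python
# Illustrative sketch (not the authors' SAS code): simulating binary item
# responses under the 4-parameter logistic (4-PL) IRT model,
# P(X=1 | theta) = c + (d - c) / (1 + exp(-a * (theta - b))).
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_items = 500_000, 4

theta = rng.normal(0.0, 1.0, size=(n_subjects, 1))  # latent trait
a = np.array([1.0, 1.5, 0.8, 1.2])    # discrimination
b = np.array([-1.0, 0.0, 0.5, 1.0])   # difficulty
c = np.array([0.0, 0.1, 0.15, 0.2])   # lower asymptote ("guessing")
d = np.array([1.0, 1.0, 0.95, 0.9])   # upper asymptote ("slipping")

p = c + (d - c) / (1.0 + np.exp(-a * (theta - b)))
responses = (rng.random((n_subjects, n_items)) < p).astype(int)
print(responses.mean(axis=0))  # empirical endorsement rate per item
```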
Procedia PDF Downloads 358
450 Challenges Faced in Hospitality and Tourism Education: Rural Versus Urban Universities
Authors: Adelaide Rethabile Motshabi Pitso-Mbili
Abstract:
The disparity between universities in rural and urban areas of South Africa is an ongoing issue. These universities vary considerably, for example in the performance of students and lecturers, which is viewed as a worrying discrepancy related to knowledge gaps and educational inequality. According to research, rural students routinely perform worse than urban students in sub-Saharan Africa, and the disparity is wide compared to the global average. This may be a result of the different challenges that universities in rural and urban areas face. Hence, the aim of this study was to compare the challenges faced by rural and urban universities, especially in hospitality and tourism programs, and to recommend possible solutions. The study used a qualitative methodology that included focus groups and in-depth interviews: eight focus groups of final-year students in hospitality and tourism programs from four institutions, and in-depth interviews with four department heads of those programs. Additionally, the study was motivated by teacher collaboration theory, which proposes that colleagues can help one another for the benefit of students and the institution. It was revealed that rural universities face more challenges than urban universities when it comes to hospitality and tourism education. The results of the interviews showed that universities in rural areas have a high staff turnover rate and offer fewer courses due to a lack of resources, such as the infrastructure, staff, equipment, and materials needed to give students hands-on training on campus and in the various hospitality and tourism programs. Urban universities, on the other hand, provide a variety of courses in the hospitality and tourism areas, and while resources are seldom an issue, they must deal with large class enrolments and insufficient funding to support them all. Additionally, students in remote locations noted that a lack of water and electricity makes it difficult for them to complete practical lessons. It is recommended that universities collaborate or develop partnerships to help one another overcome obstacles, and that universities in rural areas visit those in urban areas to observe how things are done there and to determine where they can improve. The significance of the study is that it will bring rural and urban educational processes and practices into greater alignment of standards, benefits, and achievements; this will also help retain staff members within rural universities. The present study contributes to the literature by expanding the accumulation of knowledge on research topics, challenges, trends, and innovations in hospitality and tourism education, and by setting forth an agenda for future research.
Keywords: hospitality and tourism education, rural and urban universities, collaboration, teacher and student performance, educational inequality
Procedia PDF Downloads 63
449 Field Performance of Cement Treated Bases as a Reflective Crack Mitigation Technique for Flexible Pavements
Authors: Mohammad R. Bhuyan, Mohammad J. Khattak
Abstract:
The deterioration of flexible pavements due to crack reflection from the soil-cement base layer is a major concern around the globe. The service life of flexible pavements diminishes significantly because of reflective cracks. Highway agencies have struggled for decades to prevent or mitigate these cracks in order to increase pavement service lives. The root cause of reflective cracks is the shrinkage cracking that occurs in soil-cement bases during the cement hydration process. The primary factor that causes the shrinkage is the cement content of the soil-cement mixture. With increasing cement content, the soil-cement base gains the strength and durability necessary to withstand traffic loads, but at the same time, higher cement content creates more shrinkage, resulting in more reflective cracks in the pavement. Historically, various U.S. states have used soil-cement bases for constructing flexible pavements. The state of Louisiana (USA) has used 8 to 10 percent cement content to manufacture soil-cement bases. Such traditional soil-cement bases yield a 2.0 MPa (300 psi) 7-day compressive strength and are termed cement stabilized design (CSD). As these CSD bases generate significant reflective cracking, another soil-cement base design, called cement treated design (CTD), has been utilized with 4 to 6 percent cement content, which yields a 1.0 MPa (150 psi) 7-day compressive strength. The reduction of cement content in CTD bases is expected to minimize shrinkage cracking, thus increasing pavement service lives. Hence, this research study evaluates the long-term field performance of CTD bases relative to CSD bases used in flexible pavements. The Pavement Management System of the state of Louisiana was utilized to select flexible pavement projects with CSD and CTD bases that had good historical records and time-series distress performance data. It should be noted that the state collects roughness and distress data for each 1/10th-mile section every two years. In total, 120 CSD and CTD projects were analyzed in this research, where more than 145 miles (CTD) and 175 miles (CSD) of roadway data were accepted for performance evaluation and benefit-cost analyses. Here, service life extension and the performance area based on distress data were considered as benefits. It was found that CTD bases increased pavement service lives by 1 to 5 years based on transverse cracking as compared to CSD bases. On the other hand, service lives based on longitudinal and alligator cracking, rutting, and the roughness index remained the same. Hence, CTD bases provide some service life extension (2.6 years, on average) for the controlling distress, transverse cracking, while being inexpensive due to their lower cement content. Consequently, CTD bases were found to be 20% more cost-effective than traditional CSD bases when both were compared by the net benefit-cost ratio obtained from all distress types.
Keywords: cement treated base, cement stabilized base, reflective cracking, service life, flexible pavement
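The abstract does not define the net benefit-cost ratio it uses; one standard formulation, assumed here for illustration only, discounts benefits and costs over the analysis period:

\[
\mathrm{BCR} = \frac{\sum_{t=0}^{T} B_t\,(1+i)^{-t}}{\sum_{t=0}^{T} C_t\,(1+i)^{-t}},
\]

where \(B_t\) are the monetized benefits in year \(t\) (here, service life extension and the distress performance area), \(C_t\) the costs, and \(i\) the discount rate; "net" variants report \((B - C)/C\) instead. Under either metric, the extra service life obtained at a lower cement cost is what produces the reported 20% advantage for CTD bases.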
Procedia PDF Downloads 169
448 Integrating High-Performance Transport Modes into Transport Networks: A Multidimensional Impact Analysis
Authors: Sarah Pfoser, Lisa-Maria Putz, Thomas Berger
Abstract:
In the EU, the transport sector accounts for roughly one fourth of total greenhouse gas emissions, making it one of the main contributors of greenhouse gas emissions. Climate protection targets aim to reduce the negative effects of greenhouse gas emissions (e.g., climate change, global warming) worldwide. Achieving a modal shift toward environmentally friendly modes of transport such as rail and inland waterways is an important strategy for fulfilling the climate protection targets. The present paper goes beyond these conventional transport modes and reflects upon currently emerging high-performance transport modes that have the potential to complement future transport systems in an efficient way. The paper defines which properties characterize high-performance transport modes, which types of technology are included, and what potential they have to contribute to a sustainable future transport network. The first step of this paper is to compile state-of-the-art information about high-performance transport modes to find out which technologies are currently emerging. A multidimensional impact analysis will be conducted afterwards to evaluate which of the technologies is most promising. This analysis will be performed from spatial, social, economic, and environmental perspectives. Frequently used instruments such as cost-benefit analysis and SWOT analysis will be applied for the multidimensional assessment. The estimations for the analysis will be derived from desktop research and discussions in an interdisciplinary team of researchers. For the purpose of this work, high-performance transport modes are characterized as transport modes with very fast and very high-throughput connections that could act as efficient extensions to the existing transport network. The recently proposed hyperloop system represents a potential high-performance transport mode that might be an innovative supplement to current transport networks. The idea of the hyperloop is that persons and freight are shipped in a tube at more than airline speed. Another innovative technology consists of drones for freight transport. Amazon is already testing drones for parcel shipments, aiming for delivery times of 30 minutes; drones can therefore be considered high-performance transport modes as well. The Trans-European Transport Networks program (TEN-T) addresses the expansion of transport grids in Europe and also includes high-speed rail connections to better link important European cities. These services should increase the competitiveness of rail and are intended to replace aviation, which is known to be a polluting transport mode. In this sense, the integration of high-performance transport modes as described above supports the objectives of the TEN-T program. The results of the multidimensional impact analysis will reveal the potential future effects of integrating high-performance modes into transport networks. Building on that, recommendations can be given on the subsequent (research) steps necessary to ensure the most efficient implementation and integration processes.
Keywords: drones, future transport networks, high performance transport modes, hyperloops, impact analysis
Procedia PDF Downloads 333