Search results for: optimum energy systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1766


56 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks

Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci

Abstract:

The need to reduce fuel consumption and noise during taxi operations at airports, against a background of constantly increasing air traffic, has led the aerospace industry to move towards electric taxiing. This is, in fact, one of the problems currently being addressed by SESAR JU, and two main solutions are being proposed. In the first, electric motors are installed in the main (or nose) landing gear of the aircraft. In the second, manned or unmanned electric tow trucks are used to tow aircraft from the gate to the runway (or vice versa). The presence of the tow trucks increases vehicle traffic inside the airport. It is therefore important to design the system so that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on the use of autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. This system will consist of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool will be responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. Two main optimization strategies are proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements in order to find a global optimum. With the second strategy, called decentralized optimization or a multi-agent system, the decision authority is distributed among several agents. These agents could be the aircraft, the tow trucks, and taxiway or runway intersections.
This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI will provide information on the movement and status of each aircraft and tow truck, and alert ATC about any impending conflicts. It will also enable ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual Air Traffic Controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness.
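As a minimal illustration of the routing step such an optimization tool performs, the sketch below runs Dijkstra's algorithm over a small, entirely hypothetical taxiway graph (node names and taxi times are invented for the example); the actual system would layer tow-truck allocation and conflict resolution on top of this.

```python
import heapq

def shortest_taxi_route(graph, start, goal):
    """Dijkstra shortest path over a taxiway graph.
    graph: {node: [(neighbour, taxi_time_seconds), ...]}"""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue  # stale queue entry
        visited.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # reconstruct the path by walking predecessors back to the start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# hypothetical taxiway segments: gate -> intersections -> runway holding point
taxiway = {
    "gate": [("A1", 60), ("A2", 90)],
    "A1": [("B1", 120)],
    "A2": [("B1", 45)],
    "B1": [("runway", 30)],
}
route, time_s = shortest_taxi_route(taxiway, "gate", "runway")
```

Here the optimizer prefers the gate-A2-B1 routing at 165 s of taxi time over the 210 s alternative through A1.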

Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization

Procedia PDF Downloads 101
55 Dietary Intake and Nutritional Inadequacy Leading to Malnutrition among Children Residing in Shelter Home, Rural Tamil Nadu, India

Authors: Niraimathi Kesavan, Sangeeta Sharma, Deepa Jagan, Sridhar Sukumar, Mohan Ramachandran, Vidhubala Elangovan

Abstract:

Background: Childhood is a dynamic period for growth and development. Optimum nutrition during this period forms a strong foundation for growth, development, resistance to infections, long-term good health, cognition, educational achievement, and work productivity in a later phase of life. Underprivileged children living in resource-constrained settings such as shelter homes are at high risk of malnutrition due to poor-quality diet and nutritional inadequacy. In low-income countries, underprivileged children are vulnerable to being deprived of nutritious food, which stands as a major challenge for the health sector. The present study aims to assess the dietary intake, nutritional status, and nutritional inadequacy, and their association with malnutrition, among children residing in shelter homes in rural Tamil Nadu. Methods: The study was a descriptive survey conducted among all children aged 8-18 years residing in two selected shelter homes (Anbu illam, a home for female children, and Amaidhi illam, a home for male children) in rural Tirunelveli, Tamil Nadu, India. A total of 57 children were recruited for the study, including 18 boys and 39 girls. Dietary intake was measured using a seven-day 24-hour recall, and the average nutrient intake was used for further analysis. Results: Of the 57 children, about 60% (n=35) were undernourished. The mean daily energy intake was 1298 (SD 180) kcal for boys and 952 (SD 155) kcal for girls. The total calorie intake was 55-60% below the estimated average requirement (EAR) for adolescent boys and girls in the 13-15 and 16-18 year age groups. Carbohydrates were the major source of energy (boys 53%, girls 51%), followed by fat (boys 31.5%, girls 34.5%) and protein (boys 14%, girls 12.9%). Dairy intake (<200 ml/day) was well below the recommendation (500 ml/day).
Intake of micro-nutrient-rich foods such as fruits, vegetables, and green leafy vegetables was <200 g/day, far below the recommended dietary guideline of 400-600 g/day for the 7-18 year age group. Nearly 26% of girls reported experiencing menstrual problems. The majority (76.9%) of the children exhibited signs and symptoms related to nutrient deficiency. Conclusion: The total energy, mineral, and micro-nutrient intakes were inadequate and below the Recommended Dietary Allowance for children and adolescents. The diet predominantly consisted of refined cereals, rice, semolina, and vermicelli. Consumption of whole grains, milk, fruits, vegetables, and leafy vegetables was far below the recommended dietary guidelines. Dietary inadequacies among these children pose a serious concern for their overall health status and its consequences in the later phase of life.
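The macronutrient energy shares reported in such dietary surveys follow directly from the standard Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat). A minimal sketch, with illustrative gram intakes that are assumptions for the example, not the study's raw data:

```python
# standard Atwater general factors, kcal per gram
ATWATER = {"carbohydrate": 4.0, "protein": 4.0, "fat": 9.0}

def energy_shares(grams):
    """Total energy (kcal) and each macronutrient's percentage share."""
    kcal = {n: g * ATWATER[n] for n, g in grams.items()}
    total = sum(kcal.values())
    return total, {n: round(100.0 * k / total, 1) for n, k in kcal.items()}

# illustrative daily intake (hypothetical values, not from the survey)
total_kcal, shares = energy_shares({"carbohydrate": 172, "protein": 42, "fat": 45})
```

With these illustrative grams, the computed shares (roughly 55% carbohydrate, 13% protein, 32% fat) resemble the pattern reported above.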

Keywords: adolescents, children, dietary intake, malnutrition, nutritional inadequacy, shelter home

Procedia PDF Downloads 55
54 Influence of Interpersonal Communication on Family Planning Practices among Rural Women in South East Nigeria

Authors: Chinwe Okpoko, Vivian Atasie

Abstract:

One of the leading causes of death among women of child-bearing age in southeast Nigeria is pregnancy. Women in the reproductive age group die at a higher rate than men of the same age bracket. Furthermore, most maternal deaths occur among poor women who live in rural communities and who generally fall within the low socio-economic group in society. Failure of policy makers and the media to create strategic awareness and communication that conform with the sensibilities of this group accounts, in part, for the persistence of this malaise. Family planning (FP) is an essential component of safe motherhood, designed to ensure that women receive high-quality care to achieve an optimum level of health for mother and infant. The aim is to control the number of children a woman gives birth to and to prevent maternal and child mortality and morbidity. This is also what the sustainable development goal (SDG) health targets of the World Health Organization (WHO) strive to achieve. FP programmes reduce exposure to the risks of child-bearing. Indeed, most maternal deaths in the developing world can be prevented by fully investing simultaneously in FP and in maternal and new-born care. Given the intrinsic value of communication in health care delivery, it is vital to adopt the most efficacious means of awareness creation and communication amongst rural women in FP. In a country where over 50% of the population resides in rural areas with an attendant low standard of living, the need to communicate health information like FP through indigenous channels becomes pertinent. Interpersonal communication amongst family, friends, religious groups, and other associations is an efficacious means of communicating social issues in rural Africa. Communication in informal settings identifies with the values and social context of the recipients.
This study therefore sought to determine the place of interpersonal communication in rural women's knowledge of FP and how it influences FP uptake. A descriptive survey design was used, with an interviewer-administered questionnaire constituting the instrument for data collection. The questionnaire was administered to 385 women from rural communities in southeast Nigeria. The results show that the majority (58.5%) of the respondents agreed that interpersonal communication helps women understand how to plan their family size. Many rural women (82%) prefer the short-term natural method to the more effective modern contraceptive methods (38.1%). Husbands’ approval of FP, as indicated by a mean response of 2.56, is a major factor accounting for the adoption of FP messages among rural women. Socio-demographic data also reveal that educational attainment and/or exposure influenced women’s acceptance or otherwise of FP messages. The study therefore recommends, amongst others, the targeting of husbands in subsequent FP communication interventions, since they play a major role in contraceptive usage.

Keywords: family planning, interpersonal communication, interpersonal interaction, traditional communication

Procedia PDF Downloads 100
53 Natural Fibers Design Attributes

Authors: Brayan S. Pabón, R. Ricardo Moreno, Edith Gonzalez

Abstract:

Among the wide set of Colombian natural fibers is the banana stem leaf, known as Calceta de Plátano, a material present in several regions of the country that is extracted from the pseudo-stem of the banana plant (Musa paradisiaca) during regular maintenance. Colombia produced 2.8 million tons in 2007 and 2008, corresponding to 8.2% of international production, a figure that is growing. This material was selected for study because it is not being used by farmers, as it is perceived as waste from the banana harvest and as a pest-propagation agent inside the planting. In addition, the Calceta has no industrial applications in Colombia, since there is not enough concrete knowledge about the properties of the material and the possible applications it could have. Given this situation, industrial design is used as a link between the properties of the material and the need to transform it into industrial products for the market. The project therefore identifies potential design attributes that the banana stem leaf can offer for product development. The methodology was divided into two main parts. Methodology for material recognition: -Data collection, drawing on craftsmen's experience and the bibliography. -Knowledge in practice, with controlled experiments and validation tests. -Creation of design attributes and a material profile according to the knowledge developed. Design methodology: -Selection of application fields, exploring the use of the attributes and their relation to product functions. -Evaluation of the possible fields and selection of the optimum application. -Design process with sketching, ideation, and product development. Different protocols were elaborated to qualitatively determine some material properties of the Calceta, and whether they could be designated as design attributes.
Once the validation protocols were defined, performed, and analyzed, 25 design attributes were identified and classified into 4 attribute categories (Environmental, Functional, Aesthetic, and Technical), forming the material profile. Then, 15 application fields were defined based on the relation between product functions and the use of the Calceta attributes. These fields were evaluated to measure how extensively the functional attributes are used. After the field evaluation, a final field was defined, influenced by the traditional use of the fiber for packing food. As a final result, two products were designed for this application field. The first is the Multiple Container, which holds small or large thin pieces of food, such as potato chips or small sausages, and allows the consumption of food with sauces or dressings. The second is the Chorizo Container, specifically designed for this food due to its long shape and consumption mode. Natural fiber research allows the generation of more solid and complete knowledge about natural fibers. In addition, the research is a way to strengthen identity through the investigation of what is proper and autochthonous, allowing the use of national resources in a sustainable and creative way. Using divergent thinking and design as a tool, this investigation can achieve advances in natural fiber handling.

Keywords: banana stem leaf, Calceta de Plátano, design attributes, natural fibers, product design

Procedia PDF Downloads 222
52 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89

Authors: A. Chatel, I. S. Torreguitart, T. Verstraete

Abstract:

The paper deals with a single-point optimization of the LS89 turbine using an adjoint optimization and defining the design variables within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, which combines parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. In this manuscript, we show that by doing the parameterization at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy, avoiding the limited arithmetic precision and the truncation error of finite differences. Then, the parametric effectiveness is computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed.
However, this assumption can hinder the creation of better parameterizations that would allow producing more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed. The trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization where the parameterization is created using the optimal knot positions results in a more efficient strategy to reach a better optimum than the multilevel optimization where the position of the knots is arbitrarily assumed.
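Parametric effectiveness, as used here, rates how much of the adjoint-dictated shape change a given set of CAD parameters can reproduce. A minimal sketch of one common formulation (the ratio of the norm of the sensitivity's projection onto the span of the design velocities to the norm of the sensitivity itself); the three-point sensitivity vector and design velocities below are toy values, not the LS89 data:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return dot(a, a) ** 0.5

def parametric_effectiveness(sensitivity, velocities):
    """||proj_V(s)|| / ||s||: fraction of the adjoint surface sensitivity s
    reproducible by the span V of CAD design velocities.
    Computed via Gram-Schmidt orthonormalisation of the velocity set."""
    basis = []
    for v in velocities:
        w = list(v)
        for b in basis:
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        n = norm(w)
        if n > 1e-12:  # skip linearly dependent velocities
            basis.append([wi / n for wi in w])
    proj = [0.0] * len(sensitivity)
    for b in basis:
        c = dot(sensitivity, b)
        proj = [p + c * bi for p, bi in zip(proj, b)]
    return norm(proj) / norm(sensitivity)

# toy surface sensitivity and two design velocities (hypothetical)
s = [1.0, 2.0, 2.0]
eff = parametric_effectiveness(s, [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
```

Inserting a knot adds a design velocity to the set; a well-placed knot raises this ratio towards 1, which is exactly what the differential evolutionary search above optimizes.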

Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness

Procedia PDF Downloads 86
51 Production and Characterization of Biochars from Torrefaction of Biomass

Authors: Serdar Yaman, Hanzade Haykiri-Acma

Abstract:

Biomass is a CO₂-neutral fuel that is renewable and sustainable, with a very large global potential. Efficient use of biomass in power generation and in the production of biomass-based biofuels can mitigate greenhouse gas (GHG) emissions and reduce dependency on fossil fuels. Biomass energy use has other beneficial effects, such as employment creation and pollutant reduction. However, most biomass materials cannot compete with fossil fuels in terms of energy content. High moisture content and high volatile matter yields make biomass a low-calorific fuel, a significant disadvantage relative to fossil fuels. Besides, the density of biomass is generally low, which complicates transportation and storage. These negative aspects can be overcome by thermal pretreatments that upgrade the fuel properties of biomass. Torrefaction is such a thermal process, in which biomass is heated up to 300ºC under non-oxidizing conditions to avoid burning the material. The treated biomass is called biochar and has considerably lower contents of moisture, volatile matter, and oxygen compared to the parent biomass. Accordingly, the carbon content and calorific value of biochar increase to a level comparable with that of coal. Moreover, the hydrophilic nature of untreated biomass, which leads to decay in the structure, is mostly eliminated, and the surface properties of biochar become hydrophobic upon torrefaction. In order to investigate the effectiveness of torrefaction on biomass properties, several biomass species were chosen: olive milling residue (OMR), Rhododendron (a small shrubby tree with bell-shaped flowers), and ash tree (a timber tree). The fuel properties of these biomasses were analyzed through proximate and ultimate analyses as well as higher heating value (HHV) determination. For this, samples were first chopped and ground to a particle size lower than 250 µm.
Then, the samples were subjected to torrefaction in a horizontal tube furnace by heating from ambient temperature up to 200, 250, and 300ºC at a heating rate of 10ºC/min. The biochars obtained from this process were tested by the same methods applied to the parent biomass species, and the improvement in fuel properties was interpreted. Increasing the torrefaction temperature led to regular increases in the HHV of OMR, and the highest HHV (6065 kcal/kg) was obtained at 300ºC. In contrast, torrefaction at 250ºC was found to be optimal for Rhododendron and ash tree, since torrefaction at 300ºC had a detrimental effect on their HHV. An increase in carbon content and a reduction in oxygen content were also determined. The burning characteristics of the biochars were studied by thermal analysis, using a TA Instruments SDT Q600 thermal analyzer; the thermogravimetric analysis (TGA), derivative thermogravimetry (DTG), differential scanning calorimetry (DSC), and differential thermal analysis (DTA) curves were compared and interpreted. It was concluded that torrefaction is an efficient method to upgrade the fuel properties of biomass, and that the resulting biochars have superior characteristics compared to the parent biomasses.
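Torrefaction results of this kind are commonly summarized through the solid mass yield and energy yield (energy yield = mass yield × HHV ratio). The sketch below uses the 6065 kcal/kg HHV reported above for OMR at 300ºC; the raw-biomass HHV and the masses are illustrative assumptions, not values from the study:

```python
def torrefaction_yields(m_raw_g, m_char_g, hhv_raw_kcal_kg, hhv_char_kcal_kg):
    """Solid mass yield and energy yield of a torrefaction run.
    energy yield = mass yield * (HHV_char / HHV_raw)."""
    mass_yield = m_char_g / m_raw_g
    energy_yield = mass_yield * hhv_char_kcal_kg / hhv_raw_kcal_kg
    return mass_yield, energy_yield

# hypothetical run: 100 g raw OMR leaving 60 g biochar;
# raw HHV of 4500 kcal/kg is an assumption, 6065 kcal/kg is from the abstract
my, ey = torrefaction_yields(100.0, 60.0, 4500.0, 6065.0)
```

An energy yield above the mass yield (here about 0.81 vs 0.60) quantifies the "energy densification" that makes the biochar competitive with coal.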

Keywords: biochar, biomass, fuel upgrade, torrefaction

Procedia PDF Downloads 337
50 Numerical Investigation of Thermal Energy Storage Panel Using Nanoparticle Enhanced Phase Change Material for Micro-Satellites

Authors: Jelvin Tom Sebastian, Vinod Yeldho Baby

Abstract:

In space, electronic devices are constantly bombarded by radiation, which causes certain parts to fail or behave in unpredictable ways. To advance thermal controllability for microsatellites, we need a new approach and a thermal control system that is smaller than that on conventional satellites and demands no electric power. Heat exchange inside microsatellites is not as easy as in conventional satellites, due to the smaller size. With a slight mass gain and no electric power, accommodating heat using phase change materials (PCMs) is a strong candidate for solving microsatellites' thermal difficulty. In other words, PCMs can absorb or release heat in the form of latent heat, changing their phase and minimizing the temperature fluctuation around the phase change point. The main restriction of these systems is the low thermal conductivity of common PCMs, which increases the melting and solidification time and is unsuitable for specific applications such as electronics cooling. Nanoparticles are introduced to increase the thermal conductivity: adding nanoparticles to the base PCM increases its thermal conductivity, and the increase grows with weight concentration. This paper numerically investigates a thermal energy storage panel with nanoparticle-enhanced phase change material (NePCM). Silver nanostructures improve the thermal properties of the base PCM, eicosane. Different weight concentrations (1, 2, 3.5, 5, 6.5, 8, and 10%) of silver-enhanced phase change material were considered. Both steady-state and transient analyses were performed to compare the characteristics of the NePCM at different heat loads. The results showed that, in steady state, the temperature near the front panel decreased and the temperature on the NePCM panel increased as the weight concentration increased. With the increase in thermal conductivity, more heat was absorbed into the NePCM panel.
In the transient analysis, it was found that the effect of nanoparticle concentration on the maximum temperature of the system was reduced, as the melting point of the material decreases with increasing weight concentration. However, for a maximum heat load of 20 W, the model with NePCM did not reach the melting point temperature, showing that the NePCM model is capable of holding a larger heat load. To study the heat load capacity, double the load was applied: a maximum of 40 W during the first half of the cycle and a constant 0 W during the other half. A higher temperature was obtained compared to the lower heat load, and the panel maintained a constant temperature for a long duration according to the NePCM melting point. Both analyses demonstrated the temperature uniformity of the TESP. Using Ag-NePCM allows maintaining a constant peak temperature near the melting point. Therefore, by altering the weight concentration of the Ag-NePCM, it is possible to create the optimum operating temperature required for the effective working of the electronic components.
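One common first estimate of how nanoparticle loading raises the conductivity of a base PCM is the Maxwell effective-medium model for dilute spherical inclusions; this is a generic model, not necessarily the one used in this study, and the silver/eicosane property values below are nominal literature-style assumptions:

```python
def weight_to_volume_fraction(w, rho_p, rho_f):
    """Convert particle weight fraction w to volume fraction phi."""
    return (w / rho_p) / (w / rho_p + (1.0 - w) / rho_f)

def maxwell_k_eff(k_f, k_p, phi):
    """Maxwell effective thermal conductivity for dilute spherical particles
    of conductivity k_p at volume fraction phi in a fluid/matrix k_f."""
    num = k_p + 2.0 * k_f + 2.0 * phi * (k_p - k_f)
    den = k_p + 2.0 * k_f - phi * (k_p - k_f)
    return k_f * num / den

# 5 wt% silver in eicosane (assumed nominal properties:
# rho_Ag ~ 10490 kg/m3, rho_eicosane ~ 780 kg/m3, k ~ 429 and 0.25 W/m.K)
phi = weight_to_volume_fraction(0.05, rho_p=10490.0, rho_f=780.0)
k = maxwell_k_eff(k_f=0.25, k_p=429.0, phi=phi)
```

The high density of silver means a 5 wt% loading is well under 1 vol%, so the Maxwell estimate gives only a modest conductivity gain; models accounting for nanoscale effects typically predict larger enhancements.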

Keywords: carbon-fiber-reinforced polymer, micro/nano-satellite, nanoparticle phase change material, thermal energy storage

Procedia PDF Downloads 183
49 The Effect of Online Analyzer Malfunction on the Performance of Sulfur Recovery Unit and Providing a Temporary Solution to Reduce the Emission Rate

Authors: Hamid Reza Mahdipoor, Mehdi Bahrami, Mohammad Bodaghi, Seyed Ali Akbar Mansoori

Abstract:

Nowadays, with stricter limitations to reduce emissions, considerable penalties are imposed if pollution limits are exceeded. Therefore, refineries, along with focusing on improving the quality of their products, are also focused on producing products with the least environmental impact. The duty of the sulfur recovery unit (SRU) is to convert H₂S gas coming from the upstream units to elemental sulfur and minimize the burning of sulfur compounds to SO₂. The Claus process is a common process for converting H₂S to sulfur, including a reaction furnace followed by catalytic reactors and sulfur condensers. In addition to a Claus section, SRUs usually consist of a tail gas treatment (TGT) section to decrease the concentration of SO₂ in the flue gas below the emission limits. To operate an SRU properly, the flow rate of combustion air to the reaction furnace must be adjusted so that the Claus reaction is performed according to stoichiometry. Accurate control of the air demand leads to an optimum recovery of sulfur during the flow and composition fluctuations in the acid gas feed. Therefore, the major control system in the SRU is the air demand control loop, which includes a feed-forward control system based on predetermined feed flow rates and a feed-back control system based on the signal from the tail gas online analyzer. The use of online analyzers requires compliance with the installation and operation instructions. Unfortunately, most of these analyzers in Iran are out of service for different reasons, like the low importance of environmental issues and a lack of access to after-sales services due to sanctions. In this paper, an SRU in Iran was simulated and calibrated using industrial experimental data. Afterward, the effect of the malfunction of the online analyzer on the performance of SRU was investigated using the calibrated simulation. 
The results showed that an increase in the SO₂ concentration in the tail gas led to an increase in the temperature of the reduction reactor in the TGT section. This temperature increase caused the failure of the TGT section and increased the SO₂ concentration from 750 ppm to 35,000 ppm. In addition, the lack of a control system for the adjustment of the combustion air caused further increases in SO₂ emissions. In some processes, the major variable cannot be controlled directly due to difficulty in measurement or a long delay in the sampling system. In these cases, a secondary variable, which can be measured more easily, is controlled instead. With the correct selection of this variable, the main variable is controlled along with the secondary variable. This strategy for controlling a process system is referred to as "inferential control" and is considered in this paper. Therefore, a sensitivity analysis was performed to investigate the sensitivity of other measurable parameters to input disturbances. The results revealed that the outlet temperature of the first Claus reactor could be used for inferential control of the combustion air. Applying this method to the operation led to maximizing the sulfur recovery in the Claus section.
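The feed-forward part of the air demand control described above follows from the overall Claus stoichiometry (H₂S + ½O₂ → S + H₂O, with one third of the H₂S first oxidized to SO₂ in the furnace). A minimal sketch with a hypothetical acid-gas flow and composition:

```python
O2_PER_H2S = 0.5   # overall Claus stoichiometry: H2S + 1/2 O2 -> S + H2O
O2_IN_AIR = 0.21   # mole fraction of O2 in air

def air_demand_kmol_h(acid_gas_kmol_h, h2s_mole_fraction):
    """Feed-forward estimate of combustion air to a Claus reaction furnace,
    ignoring hydrocarbons/NH3 in the feed that would also consume O2."""
    h2s = acid_gas_kmol_h * h2s_mole_fraction
    return h2s * O2_PER_H2S / O2_IN_AIR

# hypothetical feed: 100 kmol/h acid gas at 90 mol% H2S
air = air_demand_kmol_h(100.0, 0.90)
```

The feedback trim from the tail gas analyzer (or, here, the inferred correction from the first reactor's outlet temperature) adjusts this feed-forward value to hold the 2:1 H₂S:SO₂ ratio.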

Keywords: sulfur recovery, online analyzer, inferential control, SO₂ emission

Procedia PDF Downloads 33
48 Assessment of On-Site Solar and Wind Energy at a Manufacturing Facility in Ireland

Authors: A. Sgobba, C. Meskell

Abstract:

The feasibility of on-site electricity production from solar and wind, and the resulting load management, are assessed for a specific manufacturing plant in Ireland. The industry sector accounts, directly and indirectly, for a high percentage of electricity consumption and global greenhouse gas emissions; therefore, it will play a key role in emission reduction and control. Manufacturing plants, in particular, are often located in non-residential areas, since they require open spaces for production machinery, parking facilities for employees, appropriate routes for supply and delivery, and special connections to the national grid, and have other environmental impacts. Since they have larger spaces compared to commercial sites in urban areas, they represent an appropriate case study for evaluating the technical and economic viability of energy system integration with low-power-density technologies, such as solar and wind, for on-site electricity generation. The available open space surrounding the analysed manufacturing plant can be used efficiently to produce a discrete quantity of energy that is instantaneously and locally consumed, so transmission and distribution losses can be reduced. Storage is not required, due to the high and almost constant electricity consumption profile. The energy load of the plant is identified through the analysis of gas and electricity consumption, both internally monitored and reported on the bills. These data are not often recorded and made available to third parties, since manufacturing companies usually keep track only of overall energy expenditures. The solar potential is modelled for a period of 21 years based on global horizontal irradiation data; the hourly direct and diffuse radiation and the energy produced by the system at the optimum pitch angle are calculated. The model is validated using the PVWatts and SAM tools. Wind speed data are available for the same period at one-hour steps at a height of 10 m.
Since the hub of a typical wind turbine reaches a higher altitude, complementary data at 50 m for a different location have been compared, and a model is defined to estimate the wind speed at the required height and location. The Weibull statistical distribution is used to evaluate the wind energy potential of the site. The results show that solar and wind energy are, as expected, generally decoupled. Based on the real case study, the percentage of load covered every hour by on-site generation (Level of Autonomy, LA) and the resulting electricity bought from the grid (Expected Energy Not Supplied, EENS) are calculated. The economic viability of the project is assessed through the Net Present Value (NPV), and the influence of the main technical and economic parameters on the NPV is presented. Since the results show that the analysed renewable sources cannot provide enough electricity, integration with a cogeneration technology is studied. Finally, the benefit to energy system integration of wind, solar, and a cogeneration technology is evaluated and discussed.
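Two of the calculations described above (extrapolating the 10 m wind speed to hub height and characterizing the site with a Weibull distribution) can be sketched as follows; the power-law shear exponent and the Weibull parameters below are illustrative assumptions, not the site's fitted values:

```python
import math

def shear_extrapolate(v_ref, z_target=50.0, z_ref=10.0, alpha=0.14):
    """Power-law wind shear profile: v(z) = v_ref * (z / z_ref) ** alpha.
    alpha = 0.14 is a typical open-terrain assumption."""
    return v_ref * (z_target / z_ref) ** alpha

def weibull_mean_speed(k, c):
    """Mean wind speed of a Weibull(k, c) distribution: c * Gamma(1 + 1/k)."""
    return c * math.gamma(1.0 + 1.0 / k)

# hypothetical site: 6 m/s measured at 10 m; Weibull shape k=2, scale c=7 m/s
v50 = shear_extrapolate(6.0)
vbar = weibull_mean_speed(k=2.0, c=7.0)
```

Once hourly wind (and solar) production series are built this way, LA is simply the hourly ratio of on-site generation to load, capped at 1, and EENS is the shortfall bought from the grid.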

Keywords: demand, energy system integration, load, manufacturing, national grid, renewable energy sources

Procedia PDF Downloads 101
47 Surface Acoustic Waves Nebulisation of Liposomes Manufactured in situ for Pulmonary Drug Delivery

Authors: X. King, E. Nazarzadeh, J. Reboud, J. Cooper

Abstract:

Pulmonary diseases, such as asthma, are generally treated by inhalation of aerosols, which has the advantage of reducing the off-target (e.g., toxicity) effects associated with systemic delivery in blood. Effective respiratory drug delivery requires a droplet size distribution between 1 and 5 µm. Inhalation of aerosols with a wide droplet size distribution outside this range results in deposition of drug in non-targeted areas of the respiratory tract, introducing undesired side effects for the patient. In order to deliver the drug solely to the lower branches of the lungs and release it in a targeted manner, a mechanism to control the production of the aerosolized droplets is required. To regulate drug release and facilitate uptake by cells, drugs are often encapsulated into protective liposomes. However, a multistep process is required for their formation, often performed at the formulation step, thereby limiting the range of available drugs or their shelf life. Using surface acoustic waves (SAWs), a pulmonary drug delivery platform was produced which enables the formation of aerosols of defined size and the formation of liposomes in situ. SAWs are mechanical waves propagating along the surface of a piezoelectric substrate. They were generated using an interdigital transducer on lithium niobate with an excitation frequency of 9.6 MHz at a power of 1 W. Disposable silicon superstrates were etched using photolithography and dry-etch processes to create an array of cylindrical through-holes with different diameters and pitches. The superstrates were coupled to the SAW substrate through a water-based gel. As the SAW propagates on the superstrate, it nebulises a lipid solution deposited onto it. The cylindrical cavities restricted the formation of large droplets in the aerosol, while at the same time unilamellar liposomes were created.
SAW-formed liposomes showed higher monodispersity compared to the control sample, as well as a faster production rate. To test the aerosol's size, dynamic light scattering and laser diffraction methods were used, both confirming size control of the aerosolised particles. The use of a silicon superstrate with a cavity size of 100-200 µm produced an aerosol with a mean droplet size within the optimum range for pulmonary drug delivery, containing the liposomes in which the medicine could be loaded. Additionally, analysis of the liposomes with cryo-TEM showed the formation of vesicles with a narrow size distribution between 80-100 nm and optimal morphology for use in drug delivery. Encapsulation of nucleic acids in liposomes through the developed SAW platform was also investigated. In vitro delivery of siRNA and DNA luciferase was achieved using the A549 cell line, a human lung carcinoma. In conclusion, a SAW pulmonary drug delivery platform was engineered to combine multiple time-consuming steps (formation of liposomes, drug loading, nebulisation) into a single platform, with the aim of delivering the drug specifically to a targeted area and reducing its side effects.
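The 1-5 µm target window above is a volume-based criterion: lung deposition scales with droplet volume, not droplet count. As a rough illustration of how a laser diffraction result might be reduced to the two figures of merit discussed (median size and in-range fraction), the sketch below analyses a synthetic lognormal droplet population; the distribution parameters are invented for illustration and are not measurements from this study.

```python
import numpy as np

# Hypothetical droplet-size data (µm), as might come from laser diffraction;
# the lognormal parameters are illustrative, not measured values.
rng = np.random.default_rng(0)
diameters = rng.lognormal(mean=np.log(3.0), sigma=0.3, size=10_000)

# Volume-weighted statistics: deposition depends on volume, not count.
volumes = diameters ** 3
order = np.argsort(diameters)
cum_vol = np.cumsum(volumes[order]) / volumes.sum()
vmd = diameters[order][np.searchsorted(cum_vol, 0.5)]  # volume median diameter

# Fraction of aerosol volume inside the 1-5 µm respirable window
in_range = (diameters >= 1.0) & (diameters <= 5.0)
respirable = volumes[in_range].sum() / volumes.sum()
print(f"VMD = {vmd:.2f} µm, respirable volume fraction = {respirable:.2f}")
```

Note that the volume-weighted median sits above the count median, which is why a count-based size report alone can be misleading for inhalation targets.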

Keywords: acoustics, drug delivery, liposomes, surface acoustic waves

Procedia PDF Downloads 86
46 Green Synthesis (Using Environment Friendly Bacteria) of Silver-Nanoparticles and Their Application as Drug Delivery Agents

Authors: Sutapa Mondal Roy, Suban K. Sahoo

Abstract:

The primary aim of this work is to synthesize silver nanoparticles (AgNPs) through environmentally benign routes to avoid undesired side effects related to chemical toxicity. The nanoparticles were stabilized with the drug ciprofloxacin (Cp) and studied for their effectiveness as drug delivery agents. Targeted drug delivery improves the therapeutic potential of drugs at the diseased site and lowers the overall dose and undesired side effects. The small size of nanoparticles greatly facilitates the transport of active agents (drugs) across biological membranes, allows them to pass through the smallest capillaries in the body (5-6 μm in diameter), and can minimize possible undesired side effects. AgNPs are non-toxic, inert, and stable, have a high binding capacity, and can thus be considered biomaterials. AgNPs were synthesized from the nutrient broth supernatant after culture of the environment-friendly bacterium Bacillus subtilis. The AgNPs showed a surface plasmon resonance (SPR) band at 425 nm. Formation of the Cp-capped Ag nanoparticles was complete within 30 minutes, as confirmed by absorbance spectroscopy. The physico-chemical nature of the AgNPs-Cp system was confirmed by Dynamic Light Scattering (DLS), Transmission Electron Microscopy (TEM), etc. The size of the AgNPs-Cp system was found to be in the range of 30-40 nm. To monitor the kinetics of drug release from the nanoparticle surface, the release of Cp was followed by careful dialysis, keeping the AgNPs-Cp system inside the dialysis bag at pH 7.4 over time. Drug release was almost complete after 30 hrs. To understand the AgNPs-Cp system better during the drug delivery process, a thorough theoretical investigation was performed employing Density Functional Theory. Electronic charge transfer, electron density, and binding energy, as well as thermodynamic properties like enthalpy, entropy, and Gibbs free energy, have been predicted.
The electronic and thermodynamic properties, governed by the AgNPs-Cp interactions, indicate that formation of the AgNPs-Cp system is exothermic, i.e., a thermodynamically favorable process. The binding energy and charge transfer analysis imply optimum stability of the AgNPs-Cp system. Thus, the synthesized Cp-Ag nanoparticles can be effectively used for biological purposes due to the environmentally benign synthesis route, which is clean, biocompatible, non-toxic, safe, cost-effective, sustainable, and eco-friendly. The Cp-AgNPs can be successfully used as biomaterials for drug delivery due to the slow release of the drug from the nanoparticles over a considerable period of time. The release kinetics show that this drug-nanoparticle assembly can be effectively used as a potential tool for therapeutic applications. The ease of the synthetic procedure, the lack of chemical toxicity, and the biological activity, along with excellent performance as a drug delivery agent, open up the prospect of using these nanoparticles as effective and successful drug delivery agents in modern medicine.
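The favorability argument above reduces to the sign of the Gibbs free energy, ΔG = ΔH - TΔS: an exothermic binding step (ΔH < 0) is spontaneous at a given temperature when the enthalpy gain outweighs any entropy penalty. A minimal numeric sketch, using placeholder values for ΔH and ΔS rather than the study's DFT results:

```python
# Illustrative check of thermodynamic favorability for a nanoparticle-drug
# binding step; the numbers below are placeholders, not the reported values.
def gibbs_free_energy(delta_h_kj, delta_s_j_per_k, temperature_k=298.15):
    """Return ΔG (kJ/mol) from ΔH (kJ/mol) and ΔS (J/mol·K)."""
    return delta_h_kj - temperature_k * delta_s_j_per_k / 1000.0

# Exothermic binding (ΔH < 0) with a modest entropy penalty (ΔS < 0):
dG = gibbs_free_energy(delta_h_kj=-85.0, delta_s_j_per_k=-120.0)
print(f"ΔG = {dG:.1f} kJ/mol -> {'spontaneous' if dG < 0 else 'non-spontaneous'}")
```

With these assumed values the TΔS term (about +35.8 kJ/mol at 298 K) does not overcome the enthalpy gain, so ΔG stays negative and the binding is favorable.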

Keywords: silver nanoparticles, ciprofloxacin, density functional theory, drug delivery

Procedia PDF Downloads 354
45 Conceptual Methods of Mitigating Matured Urban Tree Root Conflicts within the Built Environment: A Review

Authors: Mohd Suhaizan Shamsuddin

Abstract:

Urbanization degrades environmental quality and pressures the growth and development of matured urban trees in a changing environment. The roots of matured urban trees struggle as they spread among existing infrastructure, resulting in large-scale damage to structures and declined growth. Much physiological growth is impaired by the presence and installation of infrastructure within and near the root zone. Efforts to retain both the matured urban tree and the infrastructure as service providers can end in damage to one and death of the other. Meanwhile, more expenditure is spent on fixing both, or on removing matured urban trees as risks, because the mitigation methods that would reduce these problems go unconsidered. This paper aims to explain mitigation methods in practice for reducing conflicts between settling matured urban tree roots and infrastructure while keeping modified urban soil at an optimum level. Three categories capture the conflicts encountered by matured urban tree roots growing within and near infrastructure: limited soil spaces, poor soil structures, and soil space barrier installation and maintenance. For limited soil space, six methods were identified that help tree roots survive: soil volume/mounding, soil replacement/amendment in radial trenches, soil spacing with root bridges, root tunneling, raised/diverted walkways or pavements, and suspended pavement. These limited-soil-space measures address inadequate soil for roots, root spreading and settling, and the modification of construction soil media where barriers exist or are installed along root trails or zones. They enable tree roots to spread and find adequate resources (nutrients, water uptake, and oxygen) and space, and to function as stable root anchorage as the matured tree grows larger.
For poor soil structures, three methods were identified to mitigate problems of soil materials and insufficient soil voids: skeletal soil, structural soil, and soil cells. Mitigating poor soil structure means altering the existing structure or introducing a new one by modifying the quantities and ratios of materials, allowing more voids beneath for root spreading while respecting the load-bearing function of foot and vehicle traffic above. To sustain both infrastructure and tree roots grown in limited spaces, soil space barrier installation and maintenance, namely root barrier installation and root pruning, are recommended. In conclusion, these recommended methods attempt to mitigate the problems encountered at a particular place where conflicts between tree roots and infrastructure exist. A combination of methods is the best way to alleviate the conflicts, since the recognized conflicts are between tree roots and man-made structures in modified urban soil. These methods are the ones most worth considering to sustain the lifespan and growth of matured urban trees in the urban environment.

Keywords: urban tree-roots, limited soil spaces, poor soil structures, soil space barrier and maintenance

Procedia PDF Downloads 163
44 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads

Authors: Raja Umer Sajjad, Chang Hee Lee

Abstract:

Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; the success of a monitoring program depends mainly on the accuracy of these estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012-2014) from a mixed land use site located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed through the year. The investigation of a large number of water quality parameters is time-consuming and resource intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV), and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implication of sampling time on monitoring results, the number of samples required during a storm event, and the impact of seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals like lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter.
The CV among the monitored water quality parameters was found to be high (ranging from 3.8 to 15.5). This suggests that using a grab sampling design to estimate mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was found to be only 2% between two different sample size approaches, i.e., 17 samples per storm event versus 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of a storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
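The surrogate logic above rests on parameters loading together on the same principal component: if TSS, turbidity, phosphorus, and particulate-bound metals all track one underlying "particulate" signal, measuring TSS alone recovers most of the shared information. A minimal PCA sketch on synthetic event data (the numbers are invented, not the study's monitoring record):

```python
import numpy as np

# Synthetic event-mean concentrations for five parameters; illustrative only.
rng = np.random.default_rng(42)
n_events = 60
base = rng.normal(size=n_events)                # shared "particulate" signal
data = np.column_stack([
    base + 0.2 * rng.normal(size=n_events),     # TSS
    base + 0.3 * rng.normal(size=n_events),     # turbidity
    base + 0.4 * rng.normal(size=n_events),     # total phosphorus
    rng.normal(size=n_events),                  # COD (independent organics)
    base + 0.5 * rng.normal(size=n_events),     # lead
])
names = ["TSS", "turbidity", "TP", "COD", "Pb"]

# PCA via SVD on standardized data; row 0 of vt holds the PC1 loadings.
z = (data - data.mean(axis=0)) / data.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
loadings = vt[0]

# Parameters that load strongly together on PC1 are surrogate candidates.
pc1_group = [n for n, l in zip(names, np.abs(loadings)) if l > 0.4]
print("PC1 (particulate) group:", pc1_group)
```

In this toy example COD falls outside the first-component group, mirroring the study's finding that organic matter needs its own surrogate.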

Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters

Procedia PDF Downloads 211
43 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application for such designs is limited in real-life clinical trials, where the responses infrequently fit a given parametric form. On the other hand, robust estimates of the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed that are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate, and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm. These are the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived from constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient.
To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients' covariates, and the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to converge more slowly towards the expected target allocation proportion. The link-function-based design achieves the highest skewness of patient allocation to the best treatment arm and is thus ethically the best design. Other comparative merits of the proposed designs have been highlighted, and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations of the designs. Moreover, the proposed designs enable more patients to receive the better treatment during the trial, making the designs more ethically attractive to patients. An existing clinical trial has been redesigned using these methods.
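The DBCD mentioned above belongs to the Hu-Zhang family of allocation rules: the probability of assigning the next patient to arm A is pulled towards the target whenever the observed proportion drifts away from it. The sketch below simulates that mechanism for a fixed target; in a real CARA design the target would itself be re-estimated from the sequential Cox regression fits, which is omitted here for brevity, and the 0.65 target is an arbitrary illustrative value.

```python
import random

def dbcd_probability(x, rho, gamma=2.0):
    """Doubly-adaptive biased coin (Hu & Zhang form): probability of assigning
    the next patient to arm A, given current proportion x on A and target rho."""
    if x <= 0.0:
        return 1.0
    if x >= 1.0:
        return 0.0
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

random.seed(1)
target = 0.65          # assumed fixed target favouring the better arm
n_a, n = 0, 0
for _ in range(2000):
    x = n_a / n if n else 0.5
    if random.random() < dbcd_probability(x, target):
        n_a += 1
    n += 1
print(f"observed allocation to A: {n_a / n:.3f} (target {target})")
```

The rule is self-correcting: at x = rho the coin probability equals rho, so the observed proportion fluctuates around the target rather than drifting.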

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 133
44 In Vitro Propagation of Vanilla planifolia Using Nodal Explants and Varied Concentrations of Naphthaleneacetic Acid (NAA) and 6-Benzylaminopurine (BAP)

Authors: Jessica Arthur, Duke Amegah, Kingsley Akenten Wiafe

Abstract:

Background: Vanilla planifolia is the only orchid with an edible fruit among the over 35,000 Orchidaceae species found worldwide. In Ghana, Vanilla was discovered in the wild, but it is underutilized for commercial production, most likely due to a lack of knowledge of the best NAA and BAP combinations for in vitro propagation that promote successful acclimatization of regenerated plants. The growing interest and global demand for elite Vanilla planifolia plants and natural vanilla flavour emphasize the need for an effective industrial-scale micropropagation protocol. Tissue culture systems are increasingly used to grow disease-free plants, and reliable in vitro methods can also produce plantlets with typically modest proliferation rates. This study sought to develop an efficient protocol for in vitro propagation of vanilla using nodal explants by testing different concentrations of NAA and BAP for the proliferation of the entire plant. Methods: Nodal explants with dormant axillary buds were obtained from year-old laboratory-grown Vanilla planifolia plants. MS media was prepared with a nutrient stock solution (containing macronutrients, micronutrients, iron solution, and vitamins) and semi-solidified using phytagel. It was supplemented with different concentrations of NAA and BAP to induce multiple shoots and roots (0.5 mg/L BAP with NAA at 0, 0.5, 1.0, 1.5, and 2.0 mg/L, and vice versa). The explants were sterilized, cultured in labelled test tubes, and incubated at 26°C ± 2°C with a 16/8-hour light/dark cycle. Data on shoot and root growth, leaf number, node number, and survival percentage were collected over three consecutive two-week periods. The data were square-root transformed and subjected to ANOVA and LSD at a 5% significance level using the R statistical package. Results: Shoots emerged 8 days and roots 12 days after inoculation, with a 94% survival rate.
For the NAA treatments, MS media supplemented with 2.00 mg/L NAA resulted in the highest shoot length (10.45 cm), maximum root number (1.51), maximum shoot number (1.47), and the highest number of leaves (1.29). MS medium containing 1.00 mg/L NAA produced the highest number of nodes (1.62) and root length (14.27 cm). A similar growth pattern was observed for the BAP treatments. MS medium supplemented with 1.50 mg/L BAP resulted in the highest shoot length (14.98 cm), the highest number of nodes (4.60), the highest number of leaves (1.75), and the maximum shoot number (1.57). MS medium containing 0.50 mg/L BAP and 1.0 mg/L BAP generated the maximum root number (1.44) and the highest root length (13.25 cm), respectively. However, the best concentration combinations for maximizing shoots and roots were media containing 1.5 mg/L BAP combined with 0.5 mg/L NAA, and 1.0 mg/L NAA combined with 0.5 mg/L BAP, respectively. These concentrations were optimum for in vitro growth and production of Vanilla planifolia. Significance: This study presents a standardized protocol for labs to produce clean vanilla plantlets, enhancing cultivation in Ghana and beyond. It provides insights into Vanilla planifolia's growth patterns and hormone responses, aiding future research and cultivation.
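The analysis pipeline described (square-root transform of count data, then one-way ANOVA at a 5% level) can be sketched in a few lines. The counts below are made-up stand-ins for the shoot data, and the transform sqrt(x + 0.5) is a common convention for small counts, not necessarily the exact one used in the study:

```python
import math

# One-way ANOVA on square-root-transformed shoot counts; illustrative data only.
groups = {
    "0.5 mg/L BAP": [1, 1, 2, 1, 1],
    "1.0 mg/L BAP": [2, 2, 1, 2, 2],
    "1.5 mg/L BAP": [3, 2, 3, 3, 2],
}
data = {k: [math.sqrt(v + 0.5) for v in vals] for k, vals in groups.items()}

grand = [x for vals in data.values() for x in vals]
grand_mean = sum(grand) / len(grand)
ss_between = sum(len(v) * (sum(v)/len(v) - grand_mean) ** 2 for v in data.values())
ss_within = sum((x - sum(v)/len(v)) ** 2 for v in data.values() for x in v)
df_b, df_w = len(data) - 1, len(grand) - len(data)
f_stat = (ss_between / df_b) / (ss_within / df_w)
print(f"F({df_b},{df_w}) = {f_stat:.2f}")  # compare to F_crit(2,12) ≈ 3.89 at α = 0.05
```

Here the F statistic well exceeds the 5% critical value, so the group means differ; an LSD test would then identify which BAP levels differ pairwise.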

Keywords: Vanilla planifolia, In vitro propagation, plant hormones, MS media

Procedia PDF Downloads 16
41 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and use resources inefficiently. In particular, different approaches may be required to solve the complex and global engineering problems frequently encountered in real life. The bigger and more complex a problem, the harder it is to solve; such problems are called Nondeterministic Polynomial time hard (NP-hard) in the literature. The main reasons for recommending metaheuristic algorithms for such problems are their use of simple concepts, simple mathematical equations and structures, and derivative-free mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in many hardware devices. Accordingly, this approach can also be used in trending application areas such as IoT, big data, and parallel structures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study is focused on a new metaheuristic method merged with a chaotic approach. It is based on chaos theory and helps the relevant algorithms improve population diversity and convergence speed. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA identifies four types of chimpanzee groups (attacker, barrier, chaser, and driver) and proposes a suitable mathematical model for them based on the varied intelligence and sexual motivation of chimpanzees. However, this algorithm struggles with convergence rate and with escaping local optimum traps when solving high-dimensional problems.
Although the algorithm and some of its variants employ strategies to overcome these problems, they are observed to be insufficient. Therefore, a newly expanded variant is described in this study. In this algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism is provided for transition phases. This flexible structure solves the slow convergence problem of ChOA and improves its accuracy on multidimensional problems, aiming at success on global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy and solves the slow convergence problem of ChOA; 2) it proposes new hybrid movement strategy models for the position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison, since its working model is similar. The obtained results show that the proposed algorithm performs better than or equivalently to the compared algorithms.
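The chaotic ingredient described above is commonly realized by replacing uniform random draws with a chaotic map when seeding or perturbing the population, which spreads initial agents more diversely over the search space. A minimal sketch using the standard logistic map at r = 4 (a conventional choice, not necessarily the map used in Ex-ChOA):

```python
# Minimal sketch: logistic-map chaotic sequence used to initialize a
# metaheuristic population. Parameter values are standard textbook choices,
# not those of the Ex-ChOA paper.
def logistic_map_sequence(x0, n, r=4.0):
    """Fully chaotic logistic map (r = 4) on the open interval (0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_population(pop_size, dim, lower, upper, x0=0.7):
    """Map the chaotic sequence onto [lower, upper] for each coordinate."""
    chaos = logistic_map_sequence(x0, pop_size * dim)
    return [[lower + c * (upper - lower) for c in chaos[i*dim:(i+1)*dim]]
            for i in range(pop_size)]

pop = chaotic_population(pop_size=5, dim=3, lower=-10.0, upper=10.0)
print(pop[0])
```

The seed x0 must avoid the map's fixed points (0 and 0.75 for r = 4); otherwise the "chaotic" sequence degenerates to a constant.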

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 48
40 Multiparticulate SR Formulation of Dexketoprofen Trometamol by Wurster Coating Technique

Authors: Bhupendra G. Prajapati, Alpesh R. Patel

Abstract:

The aim of this research work is to develop a sustained-release multiparticulate dosage form of dexketoprofen trometamol, the pharmacologically active isomer of ketoprofen. By utilizing the active enantiomer, the dose can be minimized; an extended-release multiparticulate dosage form was therefore explored to reduce administration frequency and improve patient compliance. Drug-loaded and sustained-release coated pellets were prepared on the fluidized bed coating principle in a Wurster coater. Microcrystalline cellulose (MCC) was selected as the core pellet, povidone as the binder, and talc as the anti-tacking agent during drug loading, while Kollicoat SR 30D was used as the sustained-release polymer, triethyl citrate as the plasticizer, and micronized talc as the anti-adherent in sustained-release coating. A binder optimization trial in drug loading showed that process efficiency increased with binder concentration. Povidone K30 concentrations of 5 and 7.5% w/w with respect to the drug amount gave more than 90% process efficiency, but a higher amount of rejects (agglomerates) was observed for the drug layering trial batch taken with 7.5% binder. For drug loading, the optimum povidone concentration was therefore selected as 5% of the drug substance quantity, since this trial had good process feasibility and good adhesion of the drug onto the MCC pellets. A talc concentration of 2% w/w with respect to the total drug layering solid mass showed better anti-tacking properties, removing unwanted static charge and preventing agglomerate formation during spraying. Optimized drug-loaded pellets were coated for sustained release at 16 to 28% w/w coating weight gain, and the results suggested that a 22% w/w coating weight gain is necessary to obtain the desired drug release profile.
Three critical process parameters of Wurster coating for sustained release were further statistically optimized against the desired quality target product profile attributes (agglomerate formation, process efficiency, and drug release profile) using a central composite design (CCD) in Minitab software. The results show that the derived design space, consisting of 1.0 to 1.2 bar atomization air pressure, 7.8 to 10.0 g/min spray rate, and 29-34°C product bed temperature, gave the pre-defined drug product quality attributes. Scanning electron microscopy results also indicated that the optimized batch pellets had a very narrow particle size distribution and a smooth surface, which are ideal properties for a reproducible drug release profile. The study also confirmed that the optimized dexketoprofen trometamol pellet formulation retains its quality attributes when administered with a common vehicle, either a liquid (water) or a semisolid food (apple sauce). Conclusion: Sustained-release multiparticulates were successfully developed for dexketoprofen trometamol, which may improve the acceptability and palatability of the dosage form for better patient compliance.
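A three-factor CCD like the one described combines 2^3 factorial corner runs, 2x3 axial (star) runs, and center runs. The sketch below generates the coded point set; the coded levels would then be mapped onto the study's actual ranges (atomization pressure, spray rate, bed temperature), which are not reproduced here.

```python
from itertools import product

# Sketch of a rotatable central composite design in coded units; the mapping
# to real factor ranges (pressure, spray rate, temperature) is omitted.
def central_composite(k, alpha):
    factorial = [tuple(p) for p in product([-1.0, 1.0], repeat=k)]  # 2^k corners
    axial = []
    for i in range(k):                                              # 2k star runs
        for sign in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = sign
            axial.append(tuple(pt))
    center = [(0.0,) * k]                                           # center run
    return factorial + axial + center

# alpha = (2^k)^(1/4) ≈ 1.682 for k = 3 gives a rotatable design
design = central_composite(k=3, alpha=1.682)
print(len(design), "runs")  # 8 corner + 6 axial + 1 center = 15
```

Software such as Minitab typically repeats the center point several times to estimate pure error; a single center run is kept here for brevity.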

Keywords: dexketoprofen trometamol, pellets, fluid bed technology, central composite design

Procedia PDF Downloads 109
39 Distribution Routes Redesign through the Vehicle Routing Problem in a Havana Distribution Center

Authors: Sonia P. Marrero Duran, Lilian Noya Dominguez, Lisandra Quintana Alvarez, Evert Martinez Perez, Ana Julia Acevedo Urquiaga

Abstract:

Cuban business and economic policy are constantly being updated, while clients become ever more knowledgeable and demanding. It is therefore fundamental for companies to stay competitive through the optimization of their processes and services. One of Cuba's pillars, sustained since the triumph of the Cuban Revolution back in 1959, is free health service for all those who need it. This service is offered without any charge under the principle of preserving human life, but it implies costly management processes and logistics services to supply the necessary medicines to all units that provide health care. One of the key actors in the medicine supply chain is the Havana Distribution Center (HDC), which is responsible for the delivery of medicines in the province, as well as the acquisition of medicines from national and international producers and their subsequent transport to health care units and pharmacies on time and with the required quality. This HDC also supplies all other distribution centers in the country. Given the evident need for an actor in the supply chain that specializes in medicine supply, the possibility of centralizing this operation in a logistics service provider is analyzed. Under this arrangement, pharmacies operate as clients of the logistics service center, whose main function is to centralize all logistics operations associated with the medicine supply chain. The HDC is precisely the logistics service provider in Havana, and it is the focus of this research. In 2017 the pharmacies suffered shortfalls in the availability of medicine due to deficiencies in the distribution routes. This is because the routes are not based on routing studies, and the distribution cycle is long. The distribution routes are fixed, attend only one type of customer, and respond to territorial location by municipality.
Taking into consideration the above-mentioned problem, the objective of this research is to optimize the route system of the Havana Distribution Center. To accomplish this objective, the techniques applied were document analysis, random sampling, and statistical inference, together with tools such as the Ishikawa diagram and the computerized software ArcGIS, OsmAnd, and MapInfo. As a result, four distribution alternatives were analyzed: the current routes, routes by customer type, routes by municipality, and a combination of the last two. It was demonstrated that the territorial location alternative does not take full advantage of the transport capacities or the trip distances, which leads to elevated costs, and so the study breaks with the current ways of distribution and the current characteristics of the clients. The principal finding of the investigation is that the optimum distribution alternative is the fourth one, formed by hospitals on one hand and the grouping of pharmacies, stomatology clinics, polyclinics, and maternal and elderly homes on the other. This solution breaks with territorial location by municipality and permits different distribution cycles depending on medicine consumption and transport availability.
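The core idea behind replacing fixed municipal routes is to build routes from actual distances and demand rather than administrative boundaries. As a toy illustration of the simplest such constructive heuristic (nearest neighbour, a common VRP starting point, though not necessarily the method used in this study), the sketch below sequences one delivery route; the coordinates are invented, not Havana locations.

```python
import math

# Illustrative nearest-neighbour construction for one delivery route.
# Labels and coordinates are hypothetical (H = hospital, PH = pharmacy,
# PC = polyclinic, EH = elderly home).
depot = (0.0, 0.0)
stops = {"H1": (2, 9), "PH1": (5, 1), "PH2": (6, 2), "PC1": (1, 8), "EH1": (7, 3)}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

route, pos, remaining = [], depot, dict(stops)
while remaining:
    nxt = min(remaining, key=lambda k: dist(pos, remaining[k]))  # closest stop
    route.append(nxt)
    pos = remaining.pop(nxt)

total = (dist(depot, stops[route[0]])
         + sum(dist(stops[a], stops[b]) for a, b in zip(route, route[1:]))
         + dist(stops[route[-1]], depot))
print(route, f"length = {total:.2f}")
```

Production routing would add vehicle capacities, time windows, and an improvement phase (e.g., savings or local search), but even this greedy pass shows how distance-driven sequencing differs from routes fixed by municipality.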

Keywords: computerized geographic software, distribution, distribution routes, vehicle routing problem (VRP)

Procedia PDF Downloads 132
38 Accelerated Carbonation of Construction Materials by Using Slag from Steel and Metal Production as Substitute for Conventional Raw Materials

Authors: Karen Fuchs, Michael Prokein, Nils Mölders, Manfred Renner, Eckhard Weidner

Abstract:

The production of sand-lime bricks is of great concern due to its high energy consumption and CO₂ emissions. In particular, the production of quicklime from limestone and the energy consumed by hydrothermal curing contribute to high CO₂ emissions. Hydrothermal curing is carried out under a saturated steam atmosphere at about 15 bar and 200°C for 12 hours. Therefore, we are investigating the opportunity to replace quicklime and sand in the production of building materials with different types of slag, a calcium-rich waste from steel production. We are also investigating the possibility of substituting conventional hydrothermal curing with CO₂ curing. Six different slags (Linz-Donawitz (LD), ferrochrome (FeCr), ladle (LS), stainless steel (SS), ladle furnace (LF), electric arc furnace (EAF)) provided by "thyssenkrupp MillServices & Systems GmbH" were ground at "Loesche GmbH". Cylindrical blocks with a diameter of 100 mm were pressed at 12 MPa. The composition of the blocks varied between pure slag and mixtures of slag and sand. The effects of pressure, temperature, and time on the CO₂ curing process were studied in a 2-liter high-pressure autoclave. Pressures between 0.1 and 5 MPa, temperatures between 25 and 140°C, and curing times between 1 and 100 hours were considered. The quality of the CO₂-cured blocks was determined by measuring the compressive strength at "Ruhrbaustoffwerke GmbH & Co. KG." The degree of carbonation was determined by total inorganic carbon (TIC) and X-ray diffraction (XRD) measurements. The pH trends in the cross-section of the blocks were monitored using phenolphthalein as a liquid pH indicator. The parameter set that yielded the best-performing material was tested on all slag types. In addition, the method was scaled up to steel slag-based building blocks (240 mm x 115 mm x 60 mm) provided by "Ruhrbaustoffwerke GmbH & Co. KG" and CO₂-cured in a 20-liter high-pressure autoclave.
The results show that CO₂ curing of building blocks consisting of pure wetted LD slag leads to severe cracking of the cylindrical specimens. The high CO₂ uptake leads to an expansion of the specimens. However, if LD slag is used only proportionally to replace quicklime completely and sand proportionally, dimensionally stable bricks with high compressive strength are produced. The tests to determine the optimum pressure and temperature show 2 MPa and 50°C as promising parameters for the CO₂ curing process. At these parameters and after 3 h, the compressive strength of LD slag blocks reaches the highest average value of almost 50 N/mm². This is more than double that of conventional sand-lime bricks. Longer CO₂ curing times do not result in higher compressive strengths. XRD and TIC measurements confirmed the formation of carbonates. All tested slag-based bricks show higher compressive strengths compared to conventional sand-lime bricks. However, the type of slag has a significant influence on the compressive strength values. The results of the tests in the 20-liter plant agreed well with the results of the 2-liter tests. With its comparatively moderate operating conditions, the CO₂ curing process has a high potential for saving CO₂ emissions.

Keywords: CO₂ curing, carbonation, CCU, steel slag

Procedia PDF Downloads 73
37 Numerical Optimization of Cooling System Parameters for Multilayer Lithium Ion Cell and Battery Packs

Authors: Mohammad Alipour, Ekin Esen, Riza Kizilel

Abstract:

Lithium-ion batteries are a commonly used type of rechargeable battery because of their high specific energy and specific power. With the growing popularity of electric vehicles and hybrid electric vehicles, increasing attention has been paid to rechargeable lithium-ion batteries. However, safety problems, high cost, and poor performance at low ambient temperatures and high current rates are major obstacles to the commercial utilization of these batteries. With proper thermal management, most of these limitations could be eliminated. The temperature profile of Li-ion cells plays a significant role in the performance, safety, and cycle life of the battery, so even a small temperature gradient can lead to a great loss in the performance of battery packs. In recent years, numerous researchers have worked on new techniques to implement better thermal management of Li-ion batteries; keeping the battery cells within an optimum temperature range is its main objective. Commercial Li-ion cells are composed of several electrochemical layers, each consisting of a negative current collector, negative electrode, separator, positive electrode, and positive current collector. However, many researchers have adopted a single-layer cell model to save computing time, on the hypothesis that the thermal conductivity of the layer elements is high and the heat transfer rate is fast, so that instead of several thin layers the cell can be modeled as one thick layer. In previous work, we showed that a single-layer model is insufficient to simulate the thermal behavior and temperature nonuniformity of high-capacity Li-ion cells, and we studied the effects of the number of layers on the thermal behavior of Li-ion batteries. In this work, the thermal and electrochemical behavior of a LiFePO₄ battery is first modeled with a 3D multilayer cell model. The model is validated against experimental measurements at different current rates and ambient temperatures.
Real-time heat generation rate is also studied at different discharge rates. The results showed a non-uniform temperature distribution along the cell, which requires a thermal management system. Therefore, aluminum plates with a mini-channel system were designed to control the temperature uniformity. Design parameters such as channel number, channel width, inlet flow rate, and cooling fluid are optimized; water and air are compared as cooling fluids. Pressure drop and velocity profiles inside the channels are illustrated. Both surface and internal temperature profiles of single cells and battery packs are investigated with and without cooling systems. Our results show that optimized mini-channel cooling plates effectively control the temperature rise and uniformity of single cells and battery packs. By increasing the inlet flow rate, a cooling efficiency of up to 60% could be reached.
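The role of the coolant and the heat transfer coefficient can be illustrated with a lumped-capacitance sketch; this is far simpler than the 3D multilayer CFD model the abstract describes, and every parameter value below (heat generation, cooled area, cell mass, heat capacity, coefficients) is an assumption chosen for illustration only:

```python
# Lumped-capacitance sketch of a cooled cell (illustrative stand-in for
# the paper's 3D multilayer model). All parameter values are assumed.

def cell_temperature(h, area, q_gen, mass, cp, t_coolant, t0=25.0,
                     dt=1.0, steps=3600):
    """Euler integration of dT/dt = (q_gen - h*A*(T - T_cool)) / (m*cp)."""
    t = t0
    for _ in range(steps):
        t += dt * (q_gen - h * area * (t - t_coolant)) / (mass * cp)
    return t

# Assumed: 5 W generation, 0.01 m^2 cooled area, 0.3 kg cell,
# cp = 1000 J/(kg K), coolant at 25 C, one hour of operation.
t_air = cell_temperature(h=25.0, area=0.01, q_gen=5.0, mass=0.3,
                         cp=1000.0, t_coolant=25.0)    # air-like h
t_water = cell_temperature(h=500.0, area=0.01, q_gen=5.0, mass=0.3,
                           cp=1000.0, t_coolant=25.0)  # water-like h
```

Even in this crude model, the liquid-like heat transfer coefficient pins the cell within about 1°C of the coolant, while the air-like coefficient lets it climb nearly 20°C, mirroring the qualitative water-versus-air comparison above.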

Keywords: lithium ion battery, 3D multilayer model, mini-channel cooling plates, thermal management

Procedia PDF Downloads 134
36 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses, which contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce the secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA FINE/Design3D coupled with FINE/Turbo was used for the numerical investigation, the design of experiments, and the optimization. All flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves; each cut, with multiple control points, was created along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated randomly from values automatically chosen for the control points defined during parameterization. The optimization was carried out with two algorithms: a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm based on an artificial neural network was used in order to approach the global optimum; the evaluation of successive design iterations was performed with the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it exploits derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant.
The performance was quantified using a multi-objective function. Besides these two classes of optimization methods, there were four optimization cases: the hub only, the shroud only, the combination of hub and shroud, and a sequential case in which the shroud endwall was optimized using the already optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub was increased. The shroud optimization resulted in an increase in efficiency, while total pressure loss and entropy were reduced. The combined optimization of hub and shroud did not match the results achieved in the individual hub and shroud cases, possibly because there were too many control variables. The sequential case showed the best result, because the optimized hub was used as the initial geometry for optimizing the shroud: the efficiency was increased more than in the individual cases, with a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network approach and the conjugate gradient method were compared.
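The contrast between the two optimization strategies, a population-based global search versus a derivative-driven local search, can be shown on a toy problem. The quadratic "efficiency" below is a made-up stand-in for the isentropic efficiency objective, and the algorithm settings (population size, mutation scale, learning rate) are arbitrary illustrations, not the paper's configuration:

```python
import random

# Toy comparison of the two strategies: a genetic algorithm vs. a
# gradient-based method, on an invented smooth objective with its
# maximum at (1, -2). Nothing here comes from the study's setup.

def efficiency(x, y):
    return 1.0 - (x - 1.0) ** 2 - (y + 2.0) ** 2

def genetic_search(pop=40, gens=60, seed=0):
    rng = random.Random(seed)
    population = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: efficiency(*p), reverse=True)
        parents = population[: pop // 4]       # truncation selection + elitism
        children = []
        for _ in range(pop - len(parents)):
            px, py = rng.choice(parents)       # clone a parent, then mutate
            children.append((px + rng.gauss(0, 0.1), py + rng.gauss(0, 0.1)))
        population = parents + children
    return max(population, key=lambda p: efficiency(*p))

def gradient_ascent(x=0.0, y=0.0, lr=0.1, iters=200, h=1e-6):
    for _ in range(iters):                     # finite-difference gradient
        gx = (efficiency(x + h, y) - efficiency(x - h, y)) / (2 * h)
        gy = (efficiency(x, y + h) - efficiency(x, y - h)) / (2 * h)
        x, y = x + lr * gx, y + lr * gy
    return x, y

ga_best = genetic_search()
gd_best = gradient_ascent()
```

On a smooth unimodal objective like this one, the gradient method converges quickly and precisely, while the genetic algorithm spends many evaluations but needs no derivatives, the same efficiency-versus-robustness trade-off the abstract discusses.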

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 203
35 A Distributed Smart Battery Management System – sBMS, for Stationary Energy Storage Applications

Authors: António J. Gano, Carmen Rangel

Abstract:

Currently, electric energy storage systems for stationary applications are attracting increasing interest, namely with the integration of local renewable energy power sources into energy communities. Li-ion batteries are considered the leading electric storage devices to achieve this integration, and Battery Management Systems (BMS) are decisive for their control and optimum performance. In this work, the development of a smart BMS (sBMS) prototype with a modular distributed topology is described. The system, still under development, has a distributed architecture with modular characteristics to operate with different battery pack topologies and charge capacities, integrating adaptive algorithms for real-time monitoring and management of the functional state of multicellular Li-ion batteries, and is intended for application in the context of a local energy community fed by renewable energy sources. The sBMS includes three types of developed hardware units: (1) cell monitoring units (CMUs) for monitoring each individual cell or module within the battery pack; (2) a battery monitoring and switching unit (BMU) for global battery pack monitoring, thermal control, and functional operating state switching; (3) a main management and local control unit (MCU) for local sBMS management and control, which also serves as a communications gateway to external systems and devices. This architecture is fully expandable to battery packs with a large number of cells or modules interconnected in series, as the units have local data acquisition and processing capabilities, communicate over a standard CAN bus, and will be able to operate almost autonomously. The CMUs are intended for Li-ion cells but can be used with other cell chemistries with output voltages within the 2.5 to 5 V range. The characteristics and specifications of the different units are described, including the implemented hardware solutions.
The developed hardware supports both passive and active methods of charge equalization, considered fundamental functionalities for optimizing the performance and useful lifetime of a Li-ion battery pack. The functional characteristics of the different units, including the acquisition of different process variables through a flexible set of sensors, can support the development of custom algorithms for estimating the parameters that define the functional states of the battery pack (State-of-Charge, State-of-Health, etc.), as well as different charge equalization strategies and algorithms. The sBMS is intended to interface with other systems and devices using standard communication protocols, like those used by the Internet of Things. In the future, this architecture can evolve to a fully decentralized topology, with all units using Wi-Fi protocols and forming a mesh network, making the MCU unit unnecessary. The status of the work in progress is reported, leading to conclusions on the system as executed: the implemented hardware constitutes not only a fully functional, advanced, and configurable battery management system, but also a platform for developing custom algorithms and optimization strategies to achieve better performance of stationary electric energy storage devices.
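As one example of the "custom algorithms for estimating the parameters defining the functional states" that such a platform could host, here is a coulomb-counting State-of-Charge estimator. The abstract does not specify the sBMS's actual algorithms, so this is an illustrative assumption, including the capacity, current, and efficiency values:

```python
# Coulomb-counting SoC sketch of the kind an sBMS could run; the paper
# does not state its estimation algorithm, so this is an assumption.

def update_soc(soc, current_a, dt_s, capacity_ah, efficiency=0.99):
    """One estimator step. current_a > 0 means discharge; SoC in [0, 1]."""
    delta = efficiency * current_a * dt_s / (capacity_ah * 3600.0)
    return min(1.0, max(0.0, soc - delta))

# Assumed 20 Ah pack discharged at 10 A for 30 minutes, 1 s update rate:
soc = 1.0
for _ in range(1800):
    soc = update_soc(soc, current_a=10.0, dt_s=1.0, capacity_ah=20.0)
# 10 A for 0.5 h removes 5 Ah; with 99% efficiency, SoC drops to 0.7525.
```

In practice, pure coulomb counting drifts with sensor bias, which is one reason a distributed design with per-cell voltage and temperature sensing (as in the CMUs above) is valuable for correcting the estimate.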

Keywords: Li-ion battery, smart BMS, stationary electric storage, distributed BMS

Procedia PDF Downloads 62
34 Case Study on Innovative Aquatic-Based Bioeconomy for Chlorella sorokiniana

Authors: Iryna Atamaniuk, Hannah Boysen, Nils Wieczorek, Natalia Politaeva, Iuliia Bazarnova, Kerstin Kuchta

Abstract:

Over the last decade, due to climate change and a strategy of natural resource preservation, interest in aquatic biomass has dramatically increased. Along with mitigating environmental pressure and connecting waste streams (including CO₂ and heat emissions), a microalgae bioeconomy can supply the food, feed, pharmaceutical, and power industries with a number of value-added products. Furthermore, in comparison to conventional biomass, microalgae can be cultivated under a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. This paper presents state-of-the-art technology for the microalgae bioeconomy, from the cultivation process to the production of valuable components and by-streams. The microalga Chlorella sorokiniana was cultivated in a pilot-scale innovation concept in Hamburg (Germany) using different systems, such as a raceway pond (5000 L) and flat panel reactors (8 x 180 L). In order to achieve optimum growth conditions along with a cellular composition suitable for the subsequent extraction of value-added components, process parameters such as light intensity, temperature, and pH are continuously monitored, while metabolic needs are met by the addition of micro- and macro-nutrients into the medium to ensure autotrophic growth of the microalgae. Cultivation was followed by downstream processing and extraction of lipids, proteins, and saccharides. Lipid extraction is conducted in repeated-batch semi-automatic mode using the hot extraction method according to Randall, with hexane and ethanol as solvents at ratios of 9:1 and 1:9, respectively. Depending on the cell disruption method and the solvent ratio, the total lipid content showed significant variation between 8.1% and 13.9%.
The highest percentage of extracted biomass was reached with a sample pretreated by microwave digestion, using 90% hexane and 10% ethanol as solvents. The protein content of the microalgae was determined by two different methods: Total Kjeldahl Nitrogen (TKN), which was then converted to protein content, and the Bradford method using Brilliant Blue G-250 dye. The results showed good correlation between the two methods, with the protein content in the range of 39.8–47.1%. Characterization of neutral and acid saccharides from the microalgae was conducted by the phenol-sulfuric acid method at two wavelengths, 480 nm and 490 nm. The average concentrations of neutral and acid saccharides under the optimal cultivation conditions were 19.5% and 26.1%, respectively. Subsequently, the biomass residues are used as substrate for anaerobic digestion at laboratory scale. The methane concentration, measured on a daily basis, showed some variation among samples after the extraction steps but remained between 48% and 55%. The CO₂ formed during the fermentation process, and after combustion in the Combined Heat and Power unit, can potentially be fed back into the cultivation process as a carbon source for the photoautotrophic synthesis of biomass.
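The TKN route reports nitrogen, which must be converted to protein with a nitrogen-to-protein factor. The sketch below uses the generic factor of 6.25; the abstract does not state which factor was applied, and algae-specific factors (around 4.78) are sometimes preferred, so both the factor choice and the example nitrogen value are assumptions:

```python
# TKN-to-protein conversion: protein (%) = TKN nitrogen (%) x factor.
# Factor 6.25 is the generic value; the paper does not state its choice.

def protein_from_tkn(tkn_percent_dw, factor=6.25):
    """Protein content (% dry weight) from TKN nitrogen (% dry weight)."""
    return tkn_percent_dw * factor

# An assumed 7.0% N by dry weight maps to 43.75% protein with the
# generic factor, inside the 39.8-47.1% range reported above.
protein = protein_from_tkn(7.0)
```

The sensitivity to the factor (6.25 versus ~4.78 shifts the result by over 20% relative) is one reason the study's cross-check against the Bradford method is valuable.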

Keywords: bioeconomy, lipids, microalgae, proteins, saccharides

Procedia PDF Downloads 221
33 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry

Authors: Nadia Belu, Laurenţiu Mihai Ionescu, Agnieszka Misztal

Abstract:

The automotive industry is one of the most important industries in the world, affecting not only the economy but also world culture. In the present financial and economic context, the field faces new challenges posed by the current crisis: companies must maintain product quality and deliver on time at a competitive price in order to achieve customer satisfaction. Two of the techniques most strongly recommended for product development by the quality management standards specific to the automotive industry are Failure Mode and Effects Analysis (FMEA) and the Control Plan. FMEA is a methodology for risk management and quality improvement aimed at identifying potential causes of failure of products and processes, quantifying them by risk assessment, ranking the identified problems according to their importance, and determining and implementing the related corrective actions. Companies use Control Plans, built from the FMEA results, to evaluate a process or product for strengths and weaknesses and to prevent problems before they occur. Control Plans are written descriptions of the systems used to control and minimize product and process variation; in addition, they specify the process monitoring and control methods (for example, Special Controls) used to control Special Characteristics. In this paper, we propose a computer-aided solution based on genetic algorithms to reduce the effort of drafting the FMEA and Control Plan reports required for product launch, and to improve the knowledge of development teams for future projects. The solution allows the design team to enter the data required for the FMEA. The actual analysis is performed using genetic algorithms to find an optimum between the RPN risk factor and the production cost. A feature of genetic algorithms is that they can be used to find solutions to multi-criteria optimization problems.
In our case, the reduction of production cost is considered alongside the three specific FMEA risk factors. The analysis tool generates final reports for all FMEA processes, and the data obtained in the FMEA reports are automatically integrated with the other entered parameters in the Control Plan. The solution is implemented as an application running on an intranet with two servers: one containing the analysis and plan generation engine, and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser cutting, and bending processes used to manufacture bus chassis. Its advantages are the efficient elaboration of documents in the current project, by automatically generating FMEA and Control Plan reports through multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The proposed solution is a cheap alternative to other solutions on the market, as it is implemented with open-source tools.
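The quantity being traded off against cost is the Risk Priority Number, RPN = Severity × Occurrence × Detection (each conventionally scored 1–10). The sketch below computes RPN and solves a tiny risk-versus-cost trade-off by exhaustive search rather than the paper's genetic algorithm; the corrective actions, RPN reductions, and costs are invented for illustration:

```python
from itertools import combinations

# FMEA's Risk Priority Number: RPN = Severity x Occurrence x Detection.
# The budgeted action selection below is a simplified, exhaustive
# stand-in for the paper's GA; all actions and numbers are hypothetical.

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

# (corrective action, RPN reduction if applied, cost of the action)
ACTIONS = [("weld fixture upgrade", 120, 900.0),
           ("laser power check", 60, 200.0),
           ("bend-angle gauge", 45, 150.0)]

def best_action_set(budget):
    """Subset of actions maximizing total RPN reduction within budget."""
    best, best_gain = (), 0
    for r in range(1, len(ACTIONS) + 1):
        for subset in combinations(ACTIONS, r):
            cost = sum(a[2] for a in subset)
            gain = sum(a[1] for a in subset)
            if cost <= budget and gain > best_gain:
                best, best_gain = subset, gain
    return [a[0] for a in best], best_gain

chosen, gain = best_action_set(400.0)   # budget excludes the big upgrade
```

Exhaustive search is only viable for a handful of actions; with the many failure modes of a real FMEA, the search space grows exponentially, which is precisely where a genetic algorithm becomes the practical choice.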

Keywords: automotive industry, FMEA, control plan, automotive technology

Procedia PDF Downloads 377
32 Nano-Enabling Technical Carbon Fabrics to Achieve Improved Through Thickness Electrical Conductivity in Carbon Fiber Reinforced Composites

Authors: Angelos Evangelou, Katerina Loizou, Loukas Koutsokeras, Orestes Marangos, Giorgos Constantinides, Stylianos Yiatros, Katerina Sofocleous, Vasileios Drakonakis

Abstract:

Owing to their outstanding strength-to-weight properties, carbon fiber reinforced polymer (CFRP) composites have attracted significant attention, finding use in various fields (sports, automotive, transportation, etc.). The current momentum indicates an increasing demand for their employment in high-value bespoke applications, such as avionics and electronic casings, damage-sensing structures, and EMI (electromagnetic interference) structures, that dictate the use of materials with increased electrical conductivity both in-plane and through the thickness. Several research groups have focused on enhancing the through-thickness electrical conductivity of FRPs, in an attempt to combine their intrinsically high relative strengths with an improved z-axis electrical response. However, only a limited number of studies deal with printing nano-enhanced polymer inks to produce a pattern at the dry fabric level that could be used to fabricate CFRPs with improved through-thickness electrical conductivity. The present study investigates the use of a screen-printing process on technical dry fabrics with nano-reinforced polymer-based inks to achieve the required through-thickness conductivity, opening new pathways for the application of fiber reinforced composites in niche products. Commercially available inks and in-house prepared inks reinforced with electrically conductive nanoparticles are employed and printed in different patterns. The aim is to investigate both the effect of the nanoparticle concentration and the droplet pattern (diameter, inter-droplet distance, and coverage) in order to optimize printing for the desired level of conductivity enhancement at the lamina level. The electrical conductivity is first measured at the ink level, using a four-probe configuration, to pinpoint the optimum concentrations to be employed.
Upon printing the different patterns, the coverage of the dry fabric area is assessed along with the permeability of the resulting fabrics, since the fabrication of CFRPs requires adequate wetting by the epoxy matrix. The results demonstrated increased electrical conductivities of the printed droplets, rising from the benchmark value of 0.1 S/m to between 8 and 10 S/m. Printability of dense and dispersed patterns has shown promising results in terms of increasing the z-axis conductivity without inhibiting penetration of the epoxy matrix at the processing stage of fiber reinforced composites. The high value and niche prospect of the applications that can stem from CFRPs with increased through-thickness electrical conductivity highlight the potential of this endeavor and signify screen printing as a process to nano-enable z-axis electrical conductivity in composite laminas. This work was co-funded by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation (Project: ENTERPRISES/0618/0013).
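For a collinear four-probe measurement on a bulk sample, conductivity follows from ρ = 2πs·V/I (probe spacing s, semi-infinite sample, no edge corrections). The probe spacing and the voltage/current reading below are assumptions chosen so the result lands in the reported 8–10 S/m band; they are not measurements from the study:

```python
import math

# Conductivity from a collinear four-probe reading on a bulk sample:
# rho = 2*pi*s * V/I (no geometric correction factors). The spacing
# and the example reading are assumed, not taken from the paper.

def conductivity_s_per_m(voltage_v, current_a, spacing_m=1e-3):
    resistivity = 2.0 * math.pi * spacing_m * voltage_v / current_a  # ohm*m
    return 1.0 / resistivity

# A hypothetical 0.7 V drop at 40 mA with 1 mm spacing gives ~9.1 S/m,
# inside the 8-10 S/m range reported above.
sigma = conductivity_s_per_m(voltage_v=0.7, current_a=0.04)
```

Real measurements on thin printed droplets would also need geometric correction factors for finite sample thickness and lateral extent, which this sketch omits.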

Keywords: CFRPs, conductivity, nano-reinforcement, screen-printing

Procedia PDF Downloads 127
31 A Semi-supervised Classification Approach for Trend Following Investment Strategy

Authors: Rodrigo Arnaldo Scarpel

Abstract:

Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock. In trend following, one must respond to market movements that have recently happened and are currently happening, rather than to what will happen. Optimally, a trend following strategy catches a bull market at its early stage, rides the trend, and liquidates the position at the first evidence of the subsequent bear market. To apply the strategy, one needs to find the trend and identify trade signals. In order to avoid false signals, i.e., to distinguish short-, mid-, and long-term fluctuations and to separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules that follow the trend following philosophy. Recently, some works have applied machine learning techniques for trade rule discovery. In those works, the process of rule construction is based on evolutionary learning, which aims to adapt the rules to the current environment and searches for the globally optimum rules in the search space. In this work, instead of focusing on machine learning techniques for creating trading rules, a time series trend classification employing a semi-supervised approach was used to identify, at an early stage, both the beginning and the end of upward and downward trends. Such a classification model can be employed to generate trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated.
Semi-supervised learning is used for model training when only part of the data is labeled; semi-supervised classification aims to train a classifier from both the labeled and unlabeled data such that it outperforms the supervised classifier trained only on the labeled data. To illustrate the proposed approach, daily trade information was employed, including the open, high, low, and closing values and the volume of the São Paulo Exchange Composite index (IBOVESPA) from January 1, 2000 to December 31, 2022. Over this period, consistent upward or downward price changes were visually identified for assigning labels, leaving the remaining days (those without a consistent change in price) unlabeled. For training the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators; the core of this strategy is to use the unlabeled data to generate pseudo-labels for supervised training. The results were evaluated with the annualized return and excess return, and the Sortino and Sharpe ratios. Over the evaluated time period, the obtained results were very consistent and can be considered promising for generating the intended trading signals.
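The pseudo-labeling core of the approach can be sketched with a deliberately minimal classifier: a nearest-centroid model on a single toy "indicator" feature, which labels only the confident unlabeled points and then retrains. The paper's actual model and indicators on IBOVESPA data are richer; everything below, including the feature values and the confidence threshold, is an invented illustration:

```python
# Minimal pseudo-label loop: nearest-centroid classifier on a toy
# 1-D "indicator" feature. Stand-in for the paper's approach; all
# values and the confidence threshold are invented.

def centroid(points):
    return sum(points) / len(points)

def pseudo_label_train(labeled_up, labeled_down, unlabeled, threshold=0.5):
    up_c, down_c = centroid(labeled_up), centroid(labeled_down)
    for x in unlabeled:
        margin = abs(abs(x - up_c) - abs(x - down_c))
        if margin < threshold:
            continue                 # ambiguous point: leave it unlabeled
        if abs(x - up_c) < abs(x - down_c):
            labeled_up.append(x)     # confident pseudo-label: up-trend
        else:
            labeled_down.append(x)   # confident pseudo-label: down-trend
    return centroid(labeled_up), centroid(labeled_down)  # retrained model

up_c, down_c = pseudo_label_train(
    labeled_up=[2.0, 2.5], labeled_down=[-2.0, -1.5],
    unlabeled=[1.8, -1.9, 0.1])      # 0.1 is ambiguous and stays unlabeled

def classify(x):
    return "buy" if abs(x - up_c) < abs(x - down_c) else "sell"
```

The confidence threshold is the key design choice: too low and noisy pseudo-labels corrupt the model, too high and the unlabeled data contributes nothing.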

Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation

Procedia PDF Downloads 50
30 Sustainability in Higher Education: A Case of Transition Management from a Private University in Turkey (Ongoing Study)

Authors: Ayse Collins

Abstract:

Agenda 2030 puts Higher Education Institutions (HEIs) in a situation where they should promote sustainability accordingly. However, it is still unclear: a) how sustainability is understood, and b) which actions have been taken, in both discourse and practice, by HEIs regarding the three pillars of sustainability: society, environment, and economy. Models of sustainable universities have been developed by various authors from different countries; the Global Reporting Initiative (GRI) methodology, for example, offers a variety of indicators to diagnose performance. However, these models were not developed for universities in particular, and no model can be adequately completed without defining appropriate tools to measure, analyze, and control the performance of initiatives. Research in different universities from different countries is needed to understand where we stand in terms of sustainable higher education. This study therefore explores the actions taken by a university in Ankara, Turkey, since Agenda 2030 should localize its objectives and targets to a given geography; the university has just announced 2021-2022 as its “Sustainability Year.” The research is a multi-methodology longitudinal study using the theoretical frameworks of organization theory and transition management (TM). It is designed to examine activities that are strategic, tactical, operational, and reflexive in nature, and covers six main aspects: the academic community, administrative staff, operations and services, teaching, research, and extension. The preliminary research will establish the role of top university governance, the perception of stakeholders (students, instructors, administrative and support staff) regarding sustainability, and the level of achievement at the mid-year and end-of-year evaluations.
TM theory is a multi-scale, multi-actor, process-oriented approach with an analytical framework for exploring and promoting change in social systems. The stages and respective data collection methods in this research are as follows. Pre-development stage: a) semi-structured interviews with university governance, b) an open-ended survey with faculty, students, and administrative staff, c) semi-structured interviews with support staff, and d) analysis of current secondary data on sustainability. Take-off stage: a) semi-structured interviews with university governance, faculty, students, and administrative and support staff, and b) analysis of secondary data. Breakthrough/stabilization stage: a) a survey with all stakeholders at the university, and b) secondary data analysis using selected indicators for the first sustainability report for universities. The findings from the pre-development stage highlight how stakeholders, coming from different faculties and disciplines and with different identities and characteristics, face the sustainability challenge differently. Although similar sustainable development goals (social, environmental, and economic) are set across the institution, there are differences among disciplines and stakeholders that need to be considered to reach the optimum goal. It is believed that the results will help HEIs change their organizational culture to embed sustainability values in their strategic planning and in academic and managerial work, by committing enough time and resources to cope successfully with sustainability.

Keywords: higher education, sustainability, sustainability auditing, transition management

Procedia PDF Downloads 85
29 Federalizing the Philippines: What Does It Mean for the Igorot Indigenous Peoples?

Authors: Shierwin Agagen Cabunilas

Abstract:

The unitary form of the Philippine government has built a tradition of bureaucracy that has strengthened oligarchic and clientelist politics. Consequently, the Philippines has lagged behind in development: there is widespread poverty, unemployment, and inadequate social services. In addition, the rights of national ethnic minority groups like the Igorots to develop their political and economic interests and their linguistic and cultural heritage appear to be neglected. Given these circumstances, a paradigm shift is inevitable, and the author advocates a transition from a unitary to a federal system of government. Contrary to the notion that a unitary system facilitates better governance, it actually stifles it. As a unitary government, the Philippines seems (a) to exhibit incompetence in delivering efficient, necessary services to the people and (b) to exclude minorities from political participation and policy making. This shows that the Philippine unitary system is highly centralized and operates top-down, whereas a federal system encourages decentralization, plurality, and political participation. In the author's view, federalism is beneficial to Philippine society and congenial to the Igorot indigenous peoples insofar as participative decision-making and development goals are concerned. This research employs critical and constructive analyses: the former interprets some complex practices of Philippine politics, while the latter investigates how theories of federalism can be appropriated to deal with political deficits, ethnic diversity, and indigenous peoples’ rights to self-determination. The topic is developed as follows. First, the author briefly examines the unitary structure of the Philippines and its impact on inter-governmental affairs and processes, asserting that bureaucracy and corruption, for example, are counterproductive to a participative political life, to economic development, and to the recognition of national ethnic minorities.
Second, he scrutinizes why federalism might transform this, assessing various opposing philosophical contentions on federal systems in managing an ethnically diverse society like the Philippines, and argues that the decentralization of political power and of economic and cultural development is a reason to exit the unitary arrangement. Third, he suggests that federalism can be instrumental to Igorot self-determination. Self-determination is opposed neither to national development nor to the ideals of democracy: liberty, justice, and solidarity. For example, as others have already noted, politics in the vernacular facilitates greater participation among the people; hence, there is a greater chance of arriving at policies that serve their interest. Some may worry that decentralization disintegrates a nation. According to the author, however, the recognition of minority rights, which includes self-determination, may promote filial devotion to the state. If the Igorot indigenous peoples have access to suitable institutions to determine their political life, economic goals, and social needs (i.e., education, culture, language), the country is more likely to move forward in development while fostering national unity. Remarkably, a federal system thus best responds to the Philippines’ democratic and development deficits. Federalism can also significantly rectify the practices that oppress and dislocate national ethnic minorities, as it ensures the creation of localized institutions for optimum political, economic, and cultural determination and maximizes representation in the public sphere.

Keywords: federalism, Igorot, indigenous peoples, self-determination

Procedia PDF Downloads 299
28 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa

Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini

Abstract:

Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen that is present in soil, water, and food. It has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infection. Considering the casualties and property loss caused by P. aeruginosa, the development of a rapid and reliable technique for its detection is crucial. Whole-cell aptasensors, an emerging class of biosensor that uses an aptamer as a capture probe to bind the whole cell, have attracted much attention for food-borne pathogen detection due to their convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the ‘T₂-shortening’ effect of magnetic nanoparticles in NMR measurements. Briefly, the transverse relaxation time (T₂) of neighboring water protons is shortened when magnetic nanoparticles cluster, due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. Such shortening is related to both the state change (aggregation or dissociation) and the concentration change of the magnetic nanoparticles, and can be detected using NMR relaxometry or MRI scanners. In this work, magnetic nanoparticles of two different sizes, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were first immobilized with an anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry, to capture and enrich P. aeruginosa cells. When incubated with the target, a ‘sandwich’ (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the binding of MN₄₀₀ to P. aeruginosa through aptamer recognition, as well as the aggregation of MN₁₀ on the surface of P. aeruginosa.
Because their different saturation magnetizations give MN₁₀ and MN₄₀₀ different magnetic behavior in an applied field, the MN₁₀-bacteria-MN₄₀₀ complex, together with any unreacted MN₄₀₀, can be quickly removed by magnetic separation, so that only unreacted MN₁₀ remain in the solution. The remaining MN₁₀, which are superparamagnetic and stable in a low magnetic field, serve as the signal readout for T₂ measurement. Under optimum conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor were demonstrated on real food samples and validated by plate counting. Requiring only two steps and less than 2 hours, this robust aptasensor detects P. aeruginosa over a wide linear range from 3.1 × 10² cfu/mL to 3.1 × 10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by substituting suitable aptamers. Considering its accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for the quick, direct, and accurate determination of food-borne pathogens at the cell level.
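Quantitative detection over the stated linear range implies a calibration curve mapping the T₂ readout back to cell concentration; in MRSw assays this relationship is often roughly linear in log₁₀(concentration). The sketch below illustrates that inversion step with made-up ΔT₂ values, since the paper's actual calibration data are not given here:

```python
import numpy as np

# Hypothetical calibration sketch over the reported linear range
# (3.1e2 to 3.1e7 cfu/mL). The delta_t2 values are invented for
# illustration only, not measurements from this work.
conc = np.array([3.1e2, 3.1e3, 3.1e4, 3.1e5, 3.1e6, 3.1e7])  # cfu/mL
delta_t2 = np.array([12.0, 25.0, 41.0, 55.0, 70.0, 83.0])    # ms (assumed)

# Least-squares fit of the line: delta_t2 = a * log10(conc) + b
a, b = np.polyfit(np.log10(conc), delta_t2, 1)

def estimate_cfu(measured_delta_t2):
    """Invert the calibration line to estimate concentration (cfu/mL)."""
    return 10 ** ((measured_delta_t2 - b) / a)
```

With these illustrative numbers, a measured ΔT₂ near 41 ms maps back to a concentration on the order of 3 × 10⁴ cfu/mL, the mid-range calibration point.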

Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time

Procedia PDF Downloads 123
27 Application and Aspects of Biometeorology in Inland Open Water Fisheries Management in the Context of Changing Climate: Status and Research Needs

Authors: U.K. Sarkar, G. Karnatak, P. Mishal, Lianthuamluaia, S. Kumari, S.K. Das, B.K. Das

Abstract:

Inland open water fisheries provide food, income, livelihood, and nutritional security to millions of fishers across the globe. However, open water ecosystems and fisheries are threatened by climate change and anthropogenic pressures, which have become more visible over the last six decades, making these resources vulnerable. Understanding the interaction between meteorological parameters and inland fisheries is imperative for developing mitigation and adaptation strategies. As per the IPCC 5th assessment report, the earth has been warming at a faster rate in recent decades: global mean surface temperature (GMST) for the decade 2006–2015 was 0.87°C higher than the average over the 1850–1900 period. The direct and indirect impacts of climatic parameters on the ecology of fisheries ecosystems have a great bearing on fisheries due to alterations in fish physiology. The impact of meteorological factors on ecosystem health and fish food organisms brings about changes in fish diversity, assemblage, reproduction, and natural recruitment. India’s average temperature has risen by around 0.7°C during 1901–2018. Studies show that the mean air temperature in the Ganga basin has increased by 0.20–0.47°C and annual rainfall has decreased by 257–580 mm over the last three decades. These studies clearly indicate visible impacts of climatic and environmental factors on inland open water fisheries. In addition, a significant reduction in the depth and area of wetlands (37.20–57.68%), a decline in the diversity of natural indigenous fish fauna (ranging from 22.85 to 54%), and a progression of trophic state from mesotrophic to eutrophic have been recorded. In this communication, different applications of biometeorology in inland fisheries management, with special reference to the assessment of ecosystem and species vulnerability to climatic variability and change, are discussed. 
Further, the paper discusses the impact of climate anomalies and extreme climatic events on inland fisheries and emphasizes novel modeling approaches for understanding the influence of climatic and environmental factors on reproductive phenology, with the aim of identifying climate-sensitive and climate-resilient fish species for the adoption of climate-smart fisheries in the future. Adaptation and mitigation strategies to enhance fish production, and the role of culture-based fisheries and enclosure culture in converting sequestered carbon into blue carbon, are also discussed. In general, the type and direction of influence of meteorological parameters on fish biology in open water fisheries ecosystems are not adequately understood, and the optimum range of meteorological parameters for sustaining inland open water fisheries is yet to be established. Therefore, the application of biometeorology to inland fisheries offers ample scope for understanding these dynamics under a changing climate and would help build a database in this least-addressed frontier research area. This would further help to project fisheries scenarios under changing climate regimes and to develop adaptation and mitigation strategies to cope with adverse meteorological factors, sustain fisheries, and conserve aquatic ecosystems and biodiversity.
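The kind of first-pass biometeorological model the abstract calls for can be sketched as a simple regression of an annual recruitment index on climate anomalies. Everything below is synthetic and hypothetical (data, coefficients, and variable choices are not from the study); it only illustrates the modeling step of relating fish-biology responses to meteorological drivers:

```python
import numpy as np

# Synthetic 30-year record of hypothetical climate anomalies.
rng = np.random.default_rng(0)
n = 30
temp_anom = rng.normal(0.0, 0.4, n)    # air-temperature anomaly, deg C
rain_anom = rng.normal(0.0, 150.0, n)  # annual-rainfall anomaly, mm

# Assumed "true" response for the simulation: warming reduces the
# recruitment index, rainfall increases it, plus observation noise.
recruit = 100 - 20 * temp_anom + 0.05 * rain_anom + rng.normal(0, 2, n)

# Ordinary least squares: recruit ~ intercept + temp_anom + rain_anom
X = np.column_stack([np.ones(n), temp_anom, rain_anom])
beta, *_ = np.linalg.lstsq(X, recruit, rcond=None)
intercept, b_temp, b_rain = beta
# The fitted b_temp < 0 and b_rain > 0 recover the assumed sensitivities,
# i.e., the sign and direction of influence the abstract says is poorly
# known for real open water fisheries.
```

In practice such models would use observed recruitment or spawning-phenology data and likely nonlinear methods (e.g., GAMs), but the structure, response variable against meteorological covariates, is the same.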

Keywords: biometeorology, inland fisheries, aquatic ecosystem, modeling, India

Procedia PDF Downloads 167