Search results for: micro scale parameters
210 Persistent Organic Pollutant Level in Challawa River Basin of Kano State, Nigeria
Authors: Abdulkadir Sarauta
Abstract:
Almost every type of industrial process releases trace quantities of toxic organic and inorganic compounds that end up in receiving water bodies. This study was aimed at assessing the persistent organic pollutant level in the Challawa River Basin of Kano State, Nigeria. The research set out to identify the presence of PCBs and PAHs in receiving water bodies in the study area; assess the PCB and PAH concentrations in the receiving water body of the Challawa system; evaluate the concentration levels of PCBs and PAHs in fish in the study area; determine the concentration levels of PCBs and PAHs in crops irrigated in the study area; and compare the concentrations of PCBs and PAHs with the acceptable limits set by the Nigerian, EU, U.S. and WHO standards. Data were collected using a reconnaissance survey, site inspection, field survey, laboratory experiments and secondary data sources. A total of 78 samples were collected through stratified systematic random sampling (26 samples each of water, crops and fish). Three sampling points, designated A, B and C, were chosen along the stretch of the river (upstream, midstream and downstream) from Yan Danko Bridge to Tambirawa Bridge, in order to assess the contribution of human activities to environmental pollution. The results show that polychlorinated biphenyls (PCBs) were not detected, while polycyclic aromatic hydrocarbons (PAHs) were detected in all samples analysed along the stretch of the Challawa River basin. The total ΣPAH and ΣPCB concentrations ranged from 0.001 to 0.087 mg/l and from 0.00 to 0.00 mg/l in water samples, while crop samples ranged from 2.0 ppb to 8.1 ppb and fish samples ranged from 2.0 to 6.7 ppb. All samples are considered polluted because most of the parameters analysed exceed the threshold limits set by the WHO, Nigerian, U.S. and EU standards. 
The analytical results revealed that the concentrations of some chemicals present in water, crops and fish are significantly elevated at Zamawa village, which is very close to the Challawa industrial estate and is also the main effluent discharge point; drinking water around the study area is not potable. Analysis of variance was carried out using Bartlett's test. There is a significant difference only in water, at the p < 0.05 level of significance, while crop concentrations show no difference between sites, and likewise for fish. This is a health concern, as these pollutants increase the incidence of tumour-related diseases such as skin, lung, bladder and gastrointestinal cancers, and it indicates a substantial failure of pollution abatement measures in the area. In conclusion, industrial activities and effluent have an impact on the Challawa River basin and its environs, especially on those living in the immediate surroundings. Arising from the findings of this research, it is recommended that the industries treat their liquid effluent properly by installing modern treatment plants. Keywords: Challawa River Basin, organic, persistent, pollutant
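The significance testing reported above can be sketched as follows: Bartlett's test checks homogeneity of variances across the sampling points, and a one-way ANOVA compares their mean concentrations at the p < 0.05 level. The concentration values below are hypothetical placeholders, not the study's data:

```python
# Sketch of the statistical comparison described in the abstract: Bartlett's
# test checks homogeneity of variances across sampling points A, B and C,
# and a one-way ANOVA tests for differences in mean PAH concentration.
# The concentration values below are hypothetical, for illustration only.
from scipy import stats

# Hypothetical PAH concentrations (mg/l) at the three sampling points
site_a = [0.010, 0.015, 0.012, 0.018]
site_b = [0.040, 0.055, 0.048, 0.060]
site_c = [0.020, 0.025, 0.022, 0.030]

# Bartlett's test: H0 = the three groups have equal variances
bart_stat, bart_p = stats.bartlett(site_a, site_b, site_c)

# One-way ANOVA: H0 = the three groups have equal means
f_stat, anova_p = stats.f_oneway(site_a, site_b, site_c)

print(f"Bartlett p = {bart_p:.3f}, ANOVA p = {anova_p:.4f}")
if anova_p < 0.05:
    print("Significant difference between sampling points (p < 0.05)")
```

With data this far apart, the ANOVA detects a significant difference between sites, mirroring the water-sample result reported in the abstract.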
Procedia PDF Downloads 575
209 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, the structural performance. The determination of the quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used to predict its future development and the associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example of the visualization of Li transport in concrete is also shown. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes. 
Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show quantitative chloride analyses of wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure of wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete. Keywords: chemical analysis, concrete, LIBS, spectroscopy
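The quantitative mapping step can be sketched as follows: a linear calibration curve, fitted on reference samples of known chloride content, converts a measured line-intensity scan into a 2-D chloride map. All numbers below are illustrative; a real calibration uses certified reference samples:

```python
# Minimal sketch of turning a LIBS intensity map into a quantitative chloride
# map: a linear calibration curve is fitted on reference samples of known Cl
# content, then applied pixel-by-pixel to a measured scan.
# All values are hypothetical, for illustration only.
import numpy as np

# Hypothetical calibration standards: Cl content (wt.% of binder) vs. intensity
cl_ref = np.array([0.0, 0.5, 1.0, 2.0])                  # known chloride contents
intensity_ref = np.array([120.0, 410.0, 690.0, 1260.0])  # measured Cl line intensity

slope, intercept = np.polyfit(intensity_ref, cl_ref, 1)  # linear calibration

# Hypothetical 3x4 scan grid of intensities (1 mm steps over a split core)
scan = np.array([[150., 300., 800., 1100.],
                 [140., 280., 750., 1000.],
                 [130., 260., 700.,  950.]])

cl_map = slope * scan + intercept   # 2-D quantitative chloride map, wt.% of binder
print(np.round(cl_map, 2))
```

The same per-pixel conversion extends directly to the centimetre-scale element maps described above, where clusters of high values would flag chloride-enriched spots such as cracks.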
Procedia PDF Downloads 105
208 Damages of Highway Bridges in Thailand during the 2014-Chiang Rai Earthquake
Authors: Rajwanlop Kumpoopong, Sukit Yindeesuk, Pornchai Silarom
Abstract:
On May 5, 2014, an earthquake of magnitude 6.3 Richter hit the northern part of Thailand. The epicenter was in Phan District, Chiang Rai Province. This earthquake, the so-called 2014-Chiang Rai Earthquake, is the strongest ground shaking that Thailand has ever experienced in its modern history. The 2014-Chiang Rai Earthquake confirms the geological evidence, previously ignored by most engineers, that earthquakes of considerable magnitudes of 6 to 7 Richter can occur within the country. This has promptly stimulated the authorized agencies to pay more attention to the safety of their assets and has prompted a comprehensive review of the seismic resistance design of their building structures. The focus of this paper is to summarize the damage to highway bridges as a result of the 2014-Chiang Rai ground shaking, the remedial actions, and the research needs. The 2014-Chiang Rai Earthquake caused considerable damage to nearby structures such as houses, schools, and temples. The ground shaking, however, caused damage to only one highway bridge, Mae Laos Bridge, located several kilometers away from the epicenter. The damage to Mae Laos Bridge was in the form of concrete spalling caused by pounding of the cap beam on the deck structure. The damage occurred only at the end or abutment span. Damage caused by pounding is not a surprise, but pounding at only one bridge requires further investigation and discussion. Mae Laos Bridge is a river-crossing bridge with a relatively large approach structure. Moreover, the approach structure is confined by strong retaining walls. This results in a rigid-like approach structure which vibrates at an acceleration approximately equal to the ground acceleration during the earthquake and exerts a huge force on the abutment, causing the pounding of the cap beam on the deck structure. Other bridges nearby have relatively small approach structures and therefore cannot generate pounding. 
The effect of the mass of the approach structure on pounding of the cap beam on the deck structure is also evident in the damage to a pedestrian bridge in front of Thanthong Wittaya School, located 50 meters from Mae Laos Bridge. The approach stair of this bridge is wider than the typical one in order to accommodate the stream of students during pre- and post-school times. This results in a relatively large mass of the approach stair, which in turn exerts a huge force on the pier, causing pounding of the cap beam on the deck structure during ground shaking. No sign of pounding was observed for a typical pedestrian bridge located at the other end of Mae Laos Bridge. Although pounding of the cap beam on the deck structure of the above-mentioned bridges does not cause serious damage to the bridge structure, this incident has prompted a comprehensive review of the seismic resistance design of highway bridges in Thailand. Given a sufficient mass and confinement of the approach structure, pounding of the cap beam on the deck structure can easily be excited even at low to moderate ground shaking. Moreover, if the ground shaking becomes stronger, the pounding is certainly more powerful. This may cause the deck structure to be unseated and fall off in the case of an unrestrained bridge. For a bridge with a restrainer between the cap beam and the deck structure, the restrainer may prevent the deck structure from falling off. However, preventing free movement of the pier by the restrainer may damage the pier itself. Most highway bridges in Thailand have embedded dowel bars connecting the cap beam and the deck structure. These dowel bars, however, were not intended to provide seismic resistance. Their ability to prevent the deck structure from unseating and their effect on the potential damage of the pier should be evaluated. 
In response to this expected situation, the Thailand Department of Highways (DOH) has set up a team to revise the standard practices for the seismic resistance design of highway bridges in Thailand. In addition, DOH has also funded the research project 'Seismic Resistance Evaluation of Pre- and Post-Design Modifications of DOH’s Bridges', whose scope covers full-scale tests of single-span bridges under reversed cyclic static loadings in both the longitudinal and transverse directions, as well as computer simulations to evaluate the seismic performance of the existing bridges and the design-modified bridges. The research is expected to start in October 2015. Keywords: earthquake, highway bridge, Thailand, damage, pounding, seismic resistance
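The mass argument above can be sketched numerically: a rigid, confined approach structure accelerating at roughly the peak ground acceleration exerts an inertial force proportional to its mass. A minimal sketch, with hypothetical masses and shaking intensity:

```python
# Back-of-the-envelope estimate of the inertial force a rigid approach
# structure exerts on the abutment, following the reasoning in the text:
# a confined approach vibrates at roughly the ground acceleration, so the
# force scales with its mass (F = m * a). All input values are hypothetical.

g = 9.81  # gravitational acceleration, m/s^2

def pounding_force_kN(approach_mass_t, peak_ground_accel_g):
    """Inertial force F = m * a for a rigid approach structure, in kN."""
    mass_kg = approach_mass_t * 1000.0
    accel = peak_ground_accel_g * g
    return mass_kg * accel / 1000.0

# Large confined approach vs. a small unconfined one, same shaking (0.1 g)
print(round(pounding_force_kN(2000.0, 0.1), 1))  # large approach structure
print(round(pounding_force_kN(100.0, 0.1), 1))   # small approach structure
```

Even at moderate shaking, the larger approach mass produces an order-of-magnitude larger force on the abutment, which is consistent with pounding appearing only at the bridges with large, confined approach structures.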
Procedia PDF Downloads 290
207 The Use of Vasopressin in the Management of Severe Traumatic Brain Injury: A Narrative Review
Authors: Nicole Selvi Hill, Archchana Radhakrishnan
Abstract:
Introduction: Traumatic brain injury (TBI) is a leading cause of mortality among trauma patients. In the management of TBI, the main principle is avoiding cerebral ischemia, as this is a strong determinant of neurological outcomes. The use of vasoactive drugs, such as vasopressin, has an important role in maintaining cerebral perfusion pressure to prevent secondary brain injury. Current guidelines do not suggest a preferred vasoactive drug to administer in the management of TBI, and there is a paucity of information on the therapeutic potential of vasopressin following TBI. Vasopressin is also an endogenous anti-diuretic hormone (AVP), and pathways mediated by AVP play a large role in the underlying pathological processes of TBI. This creates an overlap of discussion regarding the therapeutic potential of vasopressin following TBI. Currently, its popularity lies in vasodilatory and cardiogenic shock in the intensive care setting, with increasing support for its use in haemorrhagic and septic shock. Methodology: This is a review article based on a literature review. An electronic search was conducted via PubMed, Cochrane, EMBASE, and Google Scholar. The aim was to identify clinical studies examining the therapeutic administration of vasopressin in severe traumatic brain injury. The primary aim was to assess the neurological outcome of patients. The secondary aim was to examine surrogate markers of cerebral perfusion, such as cerebral perfusion pressure, cerebral oxygenation, and cerebral blood flow. Results: Eight papers were included in the final review. Three were animal studies; five were human studies, comprising three case reports, one retrospective review of data, and one randomised controlled trial. All animal studies demonstrated the benefits of vasopressors in TBI management. 
One animal study showed the superiority of vasopressin in reducing intracranial pressure and increasing cerebral oxygenation over a catecholaminergic vasopressor, phenylephrine. All three human case reports were supportive of vasopressin as a rescue therapy in catecholamine-resistant hypotension. The retrospective review found that vasopressin did not increase cerebral oedema in TBI patients compared to catecholaminergic vasopressors and demonstrated a significant reduction in the requirement for hyperosmolar therapy in patients who received vasopressin. The randomised controlled trial showed no significant differences in primary and secondary outcomes between TBI patients receiving vasopressin and those receiving catecholaminergic vasopressors. Apart from the randomised controlled trial, the studies included are of low-level evidence. Conclusion: Studies favour vasopressin within certain parameters of cerebral function compared to control groups. However, the neurological outcomes of the patient groups are not known, and animal study results are difficult to extrapolate to humans. It cannot be said with certainty whether vasopressin's benefits surpass those of other vasoactive drugs, owing to the weaknesses of the evidence. Further randomised controlled trials, which are larger, standardised, and rigorous, are required to improve knowledge in this field. Keywords: catecholamines, cerebral perfusion pressure, traumatic brain injury, vasopressin, vasopressors
Procedia PDF Downloads 67
206 Radish Sprout Growth Dependency on LED Color in Plant Factory Experiment
Authors: Tatsuya Kasuga, Hidehisa Shimada, Kimio Oguchi
Abstract:
Recent rapid progress in ICT (Information and Communication Technology) has advanced the penetration of sensor networks (SNs) and their attractive applications. Agriculture is one of the fields well able to benefit from ICT. Plant factories, which control several parameters related to plant growth in closed areas, such as air temperature, humidity, water, culture medium concentration, and artificial lighting, by using computers and AI (Artificial Intelligence), are being researched in order to obtain stable and safe production of vegetables and medicinal plants all year round anywhere, and to attain self-sufficiency in food. By providing isolation from the natural environment, a plant factory can achieve higher productivity and safe products. However, the biggest issue with plant factories is the return on investment. Profits are tenuous because of the large initial investments and running costs, i.e., electric power, incurred. At present, LED (Light Emitting Diode) lights are being adopted because they are more energy-efficient and encourage photosynthesis better than the fluorescent lamps used in the past. However, further cost reduction is essential. This paper introduces experiments that reveal which color of LED lighting best enhances the growth of cultured radish sprouts. Radish sprouts were cultivated in an experimental environment formed by a hydroponics kit with three cultivation shelves (28 samples per shelf), each with an artificial lighting rack. Seven LED arrays of different colors (white, blue, yellow-green, green, yellow, orange, and red) were compared with a fluorescent lamp as the control. Lighting duration was set to 12 hours a day. Normal water with no fertilizer was circulated. Seven days after germination, the length, weight and leaf area of each sample were measured. Electrical power consumption for all lighting arrangements was also measured. Results and discussion: As to average sample length, no clear difference was observed among the colors. 
As regards weight, the orange LED was less effective, and the difference was significant (p < 0.05). As for leaf area, the blue, yellow and orange LEDs were significantly less effective. However, all LEDs offered higher productivity per watt consumed than the fluorescent lamp. Of the LEDs, the blue LED array attained the best results in terms of length, weight and leaf area per watt consumed. Conclusion and future works: An experiment on radish sprout cultivation under seven different color LED arrays showed no clear difference in terms of sample size. However, when electrical power consumption is considered, LEDs offered about twice the growth rate of the fluorescent lamp. Among them, blue LEDs showed the best performance. Further cost reduction, e.g. low-power lighting, remains a major issue for actual system deployment. An automatic plant monitoring system with sensors is another study target. Keywords: electric power consumption, LED color, LED lighting, plant factory
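The productivity-per-watt comparison described above can be sketched as follows; the growth figures and power draws are hypothetical placeholders, not the measured data:

```python
# Sketch of the productivity-per-watt ranking described in the abstract:
# a growth metric is divided by the measured electrical power of each
# lighting array. All figures below are hypothetical, for illustration only.

# Hypothetical (mean sprout weight in g, power draw in W) per light source
lights = {
    "fluorescent": (0.50, 40.0),
    "blue LED":    (0.52, 18.0),
    "red LED":     (0.49, 18.0),
    "orange LED":  (0.40, 18.0),
}

per_watt = {name: weight / power for name, (weight, power) in lights.items()}
best = max(per_watt, key=per_watt.get)

for name, value in sorted(per_watt.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {value:.4f} g/W")
print("best:", best)
```

With numbers like these, LEDs of similar growth but roughly half the power draw come out about twice as productive per watt as the fluorescent control, which is the pattern the experiment reports.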
Procedia PDF Downloads 188
205 Antimicrobial Properties of SEBS Compounds with Zinc Oxide and Zinc Ions
Authors: Douglas N. Simões, Michele Pittol, Vanda F. Ribeiro, Daiane Tomacheski, Ruth M. C. Santana
Abstract:
The increasing demand for thermoplastic elastomers is related to their wide range of applications, such as the automotive, footwear, and wire and cable industries, adhesives and medical devices, cell phones, sporting goods, toys and others. These materials are susceptible to microbial attack. Moisture and organic matter present in some areas (such as shower areas and sinks) provide favorable conditions for microbial proliferation, which contributes to the spread of diseases and reduces the product life cycle. Compounds based on SEBS copolymers, poly(styrene-b-(ethylene-co-butylene)-b-styrene), are a class of thermoplastic elastomers (TPE), fully recyclable and largely used in domestic appliances like bath mats and toothbrushes (soft touch). Zinc oxide and zinc ions loaded in personal and home care products have become common in recent years due to their biocidal effect. In that sense, the aim of this study was to evaluate the effect of zinc as an antimicrobial agent in compounds based on SEBS/polypropylene/oil/calcite for use as refrigerator seals (gaskets), bath mats and sink squeegees. Two zinc oxides from different suppliers (ZnO-Pe and ZnO-WR) and one masterbatch of zinc ions (M-Zn-ion) were used in proportions of 0%, 1%, 3% and 5%. The compounds were prepared using a co-rotating twin-screw extruder (L/D ratio of 40/1 and 16 mm screw diameter). The extrusion parameters were kept constant for all materials. Test specimens were prepared using an injection molding machine. A compound with no antimicrobial additive (standard) was also tested. The compounds were characterized by physical (density), mechanical (hardness and tensile) and rheological (melt flow rate, MFR) properties. The Japanese Industrial Standard (JIS) Z 2801:2010 was applied to evaluate antibacterial properties against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli). 
The Brazilian Association of Technical Standards (ABNT) NBR 15275:2014 standard was used to evaluate antifungal properties against Aspergillus niger (A. niger), Aureobasidium pullulans (A. pullulans), Candida albicans (C. albicans), and Penicillium chrysogenum (P. chrysogenum). The microbiological assay showed a reduction of over 42% in the E. coli population and of over 49% in the S. aureus population. The tests with fungi gave inconclusive results, because the sample without zinc also inhibited fungal development when tested against A. pullulans, C. albicans and P. chrysogenum. In addition, the zinc-loaded samples showed worse results than the standard sample when tested against A. niger. The zinc addition did not cause significant variation in the mechanical properties. However, the density values increased with rising ZnO concentration and decreased slightly in the M-Zn-ion samples. There were also differences in the MFR results for all compounds compared to the standard. Keywords: antimicrobial, home device, SEBS, zinc
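The reported population reductions can be sketched from viable counts in the spirit of JIS Z 2801, where the antibacterial activity value R is a log10 difference between untreated and treated samples; the percentage reduction compares the same two counts directly. The CFU counts below are hypothetical:

```python
# Sketch of how population reductions can be computed from viable counts,
# in the spirit of JIS Z 2801: the antibacterial activity value R is the
# log10 difference between untreated and treated samples after incubation.
# The CFU counts below are hypothetical, for illustration only.
import math

def percent_reduction(cfu_control, cfu_treated):
    """Percentage reduction of the treated sample vs. the untreated control."""
    return (1.0 - cfu_treated / cfu_control) * 100.0

def activity_value_R(cfu_control, cfu_treated):
    """Antibacterial activity: R = log10(control) - log10(treated)."""
    return math.log10(cfu_control) - math.log10(cfu_treated)

# Hypothetical 24 h counts (CFU/cm^2) for an E. coli assay
control, treated = 1.0e6, 5.6e5

print(round(percent_reduction(control, treated), 1))  # 44.0
print(round(activity_value_R(control, treated), 2))
```

Note the scale difference: a 44% reduction corresponds to an R value of only about 0.25, far below the R ≥ 2 (99% kill) that JIS Z 2801 conventionally requires to label a surface antibacterial, which puts the ~42-49% reductions above in perspective.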
Procedia PDF Downloads 324
204 Fulfillment of Models of Prenatal Care in Adolescents from Mexico and Chile
Authors: Alejandra Sierra, Gloria Valadez, Adriana Dávalos, Mirliana Ramírez
Abstract:
For years, the Pan American Health Organization/World Health Organization and other organizations have made efforts to improve the access to and quality of prenatal care as part of comprehensive programs for maternal and neonatal health, and the standards of care have been renewed in order to migrate from a medical perspective to a holistic one. However, despite these efforts, current antenatal care models have not been verified by scientific evaluation to determine their effectiveness. Teenage pregnancy is considered a very important phenomenon, since it has been strongly associated with inequality, poverty and the lack of gender equality; it is therefore important to analyze the antenatal care provided, including not only the clinical interventions but also the surrounding activities of health promotion and health education. The objective of this study was to describe whether the activities previously established in prenatal care models are being performed in the care of pregnant teenagers attending prenatal care in health institutions in two cities in Mexico and Chile during 2013. Methods: Observational, descriptive, cross-sectional study. 170 pregnant women (13-19 years) receiving prenatal care in two health institutions were included (100 women from León, Mexico and 70 from Coquimbo, Chile). Data collection: direct survey and the perinatal clinical record card, with the following used as checklists: the WHO antenatal care model (WHO, 2003), the Official Mexican Standard NOM-007-SSA2-1993 and the Personalized Service Manual on the Reproductive Process, Chile Crece Contigo. Descriptive statistics were used for data analysis. The project was approved by the relevant ethics committees. 
Results: Compliance with the interventions focused on physical and gynecological examination, immunizations, monitoring of signs and biochemical parameters was above 84% in both groups. For the guidance and counseling of pregnant teenagers, compliance rates in León were below 50%; although pregnant women in Coquimbo had higher compliance percentages, none reached 100%. The topics least covered were family planning, signs and symptoms of complications, and labor. Conclusions: Although coverage of the interventions indicated in the prenatal care models was high, there were still shortcomings in the fulfillment of orientation, education and health promotion activities. Deficiencies in adherence to prenatal care guidelines could be due to different circumstances, such as lack of registration or incomplete filling of medical records, lack of medical supplies or health personnel, or absences at prenatal check-up appointments, among many others. Therefore, studies are required to evaluate the quality of prenatal care and the effectiveness of existing models, considering the role of the different actors (pregnant women, professionals and health institutions) involved in the functionality and quality of prenatal care models, in order to create strategies to design or improve the application of a complete process of promotion and prevention in maternal and child health, as well as in sexual and reproductive health in general. Keywords: adolescent health, health systems, maternal health, primary health care
Procedia PDF Downloads 205
203 The Effect of Online Analyzer Malfunction on the Performance of Sulfur Recovery Unit and Providing a Temporary Solution to Reduce the Emission Rate
Authors: Hamid Reza Mahdipoor, Mehdi Bahrami, Mohammad Bodaghi, Seyed Ali Akbar Mansoori
Abstract:
Nowadays, with stricter limitations to reduce emissions, considerable penalties are imposed if pollution limits are exceeded. Therefore, refineries, along with focusing on improving the quality of their products, also focus on producing products with the least environmental impact. The duty of the sulfur recovery unit (SRU) is to convert the H₂S gas coming from the upstream units to elemental sulfur and to minimize the burning of sulfur compounds to SO₂. The Claus process is a common process for converting H₂S to sulfur, comprising a reaction furnace followed by catalytic reactors and sulfur condensers. In addition to a Claus section, SRUs usually include a tail gas treatment (TGT) section to decrease the concentration of SO₂ in the flue gas below the emission limits. To operate an SRU properly, the flow rate of combustion air to the reaction furnace must be adjusted so that the Claus reaction is performed according to stoichiometry. Accurate control of the air demand leads to optimum recovery of sulfur during flow and composition fluctuations in the acid gas feed. Therefore, the major control system in the SRU is the air demand control loop, which includes a feed-forward control system based on predetermined feed flow rates and a feedback control system based on the signal from the tail gas online analyzer. The use of online analyzers requires compliance with the installation and operation instructions. Unfortunately, most of these analyzers in Iran are out of service for different reasons, such as the low priority given to environmental issues and a lack of access to after-sales services due to sanctions. In this paper, an SRU in Iran was simulated and calibrated using industrial experimental data. Afterward, the effect of a malfunction of the online analyzer on the performance of the SRU was investigated using the calibrated simulation. 
The results showed that an increase in the SO₂ concentration in the tail gas led to an increase in the temperature of the reduction reactor in the TGT section. This temperature increase caused the failure of the TGT section and increased the concentration of SO₂ from 750 ppm to 35,000 ppm. In addition, the lack of a control system for the adjustment of the combustion air caused further increases in SO₂ emissions. In some processes, the major variable cannot be controlled directly due to difficulty of measurement or a long delay in the sampling system. In these cases, a secondary variable, which can be measured more easily, is controlled instead. With the correct selection of this variable, the main variable is controlled along with the secondary variable. This strategy for controlling a process system is referred to as 'inferential control' and is considered in this paper. Therefore, a sensitivity analysis was performed to investigate the sensitivity of other measurable parameters to input disturbances. The results revealed that the outlet temperature of the first Claus reactor can be used for inferential control of the combustion air. Applying this method to the operation led to maximizing the sulfur recovery in the Claus section. Keywords: sulfur recovery, online analyzer, inferential control, SO₂ emission
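The inferential control idea can be sketched as a simple loop: with the tail gas analyzer out of service, the outlet temperature of the first Claus reactor (an easily measured secondary variable) is used to trim the combustion air toward the value corresponding to stoichiometric operation. The toy plant model and PI tuning below are hypothetical, not the paper's calibrated simulation:

```python
# Minimal sketch of inferential control: a secondary variable (first Claus
# reactor outlet temperature) stands in for the unmeasurable tail gas
# composition, and a simple incremental PI loop trims the combustion air.
# The plant gain, setpoint and tuning constants are all hypothetical.

def reactor_temp(air_flow):
    """Toy steady-state plant: outlet temperature rises with excess air."""
    return 300.0 + 0.8 * (air_flow - 100.0)  # deg C; 100.0 = stoichiometric air

setpoint_T = 300.0   # temperature corresponding to stoichiometric combustion air
air = 112.0          # initial air flow, off stoichiometry
Kp, Ki, dt = 0.5, 0.2, 1.0
integral = 0.0

for _ in range(50):  # discrete PI loop driving the temperature to its setpoint
    error = setpoint_T - reactor_temp(air)
    integral += error * dt
    air += Kp * error + Ki * integral

print(round(air, 2), round(reactor_temp(air), 2))
```

Because the secondary variable tracks the air-to-acid-gas ratio, regulating it indirectly keeps the Claus reaction near stoichiometry, which is exactly the substitution the paper proposes while the analyzer is unavailable.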
Procedia PDF Downloads 75
202 Analytical Technique for Definition of Internal Forces in Links of Robotic Systems and Mechanisms with Statically Indeterminate and Determinate Structures Taking into Account the Distributed Dynamical Loads and Concentrated Forces
Authors: Saltanat Zhilkibayeva, Muratulla Utenov, Nurzhan Utenov
Abstract:
Distributed inertia forces of a complex nature appear in the links of rod mechanisms during motion. Such loads raise a number of problems: destruction can be caused by a large inertia force, and the elastic deformation of the mechanism can be considerable, which can put the mechanism out of action. In this work, a new analytical approach is proposed for the definition of internal forces in the links of robotic systems and mechanisms with statically indeterminate and determinate structures, taking into account distributed inertial and concentrated forces. The relations between the intensity of distributed inertia forces and link weight and the geometrical, physical and kinematic characteristics are determined. The distribution laws of inertia forces and dead weight make it possible, at each position of the links, to deduce the laws of distribution of internal forces along the axis of the link, so that the loads are known at any point of the link. The approximation matrices of forces of an element under the action of distributed inertia loads with trapezoidal intensity are defined. The obtained approximation matrices establish the dependence between the force vector in any cross-section of the element and the force vectors in the calculated cross-sections, and also allow defining the physical characteristics of the element, i.e., the compliance matrix of discrete elements. Hence, the compliance matrices of an element under the action of distributed inertial loads of trapezoidal shape along the axis of the element are determined. The internal loads of each continual link are unambiguously determined by a set of internal loads in its separate cross-sections and by the approximation matrices. Therefore, the task is reduced to the calculation of internal forces in a finite number of cross-sections of the elements. Consequently, this leads to a discrete model for the elastic calculation of the links of rod mechanisms. 
The discrete models of the elements of mechanisms and robotic systems, and the discrete model of the system as a whole, are constructed. The dynamic equilibrium equations for the discrete model of the elements are also derived in this work, as are the equilibrium equations of the pin and rigid joints expressed through the required internal force parameters. The obtained systems of dynamic equilibrium equations are sufficient for the definition of internal forces in the links of mechanisms whose structure is statically determinate. For the determination of internal forces in statically indeterminate mechanisms, it is necessary to build a compliance matrix for the entire discrete model of the rod mechanism, which is achieved in this work. As a result, by means of the developed technique, programs were written in the MAPLE18 system, and animations were obtained of the motion of fourth-class mechanisms of statically determinate and statically indeterminate structures, with the intensity of the transverse and axial distributed inertial loads, the bending moments, and the transverse and axial forces plotted on the links as functions of the kinematic characteristics of the links. Keywords: distributed inertial forces, internal forces, statically determinate mechanisms, statically indeterminate mechanisms
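The per-link calculation the technique builds on can be illustrated for a single link under a trapezoidal distributed inertial load: the shear force and bending moment at a finite number of cross-sections follow by integrating the load intensity along the axis. A minimal numerical sketch, assuming a simply supported link and hypothetical load values:

```python
# Sketch of the discrete-cross-section idea: for one link carrying a
# trapezoidal distributed load q(x), shear V(x) and bending moment M(x)
# are evaluated at a finite number of cross-sections by cumulative
# integration. The simply supported link and load values are hypothetical.
import numpy as np

L_link = 1.0              # link length, m
q0, q1 = 200.0, 500.0     # trapezoidal load intensity at the two ends, N/m
n = 101
x = np.linspace(0.0, L_link, n)
q = q0 + (q1 - q0) * x / L_link   # linear (trapezoidal) intensity profile

# Support reactions of a simply supported link under the trapezoidal load
total = 0.5 * (q0 + q1) * L_link                    # resultant of the load
xc = L_link * (q0 + 2.0 * q1) / (3.0 * (q0 + q1))   # load centroid from x = 0
Rb = total * xc / L_link                            # reaction at x = L
Ra = total - Rb                                     # reaction at x = 0

# Shear and moment at the discrete cross-sections by trapezoidal integration
dx = x[1] - x[0]
Q = np.concatenate(([0.0], np.cumsum((q[:-1] + q[1:]) * 0.5 * dx)))
V = Ra - Q
M = np.concatenate(([0.0], np.cumsum((V[:-1] + V[1:]) * 0.5 * dx)))

print(round(M[n // 2], 3))  # midspan bending moment, N*m
print(round(M[-1], 3))      # should be close to zero at the far support
```

In the paper's approach the same job is done exactly by the approximation matrices, which map the force vector at the calculated cross-sections to any intermediate section; the numerical integration here is only a transparent stand-in for that mapping.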
Procedia PDF Downloads 217
201 Assessment of On-Site Solar and Wind Energy at a Manufacturing Facility in Ireland
Authors: A. Sgobba, C. Meskell
Abstract:
The feasibility of on-site electricity production from solar and wind, and the resulting load management, are assessed for a specific manufacturing plant in Ireland. The industry sector accounts, directly and indirectly, for a high percentage of electricity consumption and global greenhouse gas emissions; therefore, it will play a key role in emission reduction and control. Manufacturing plants, in particular, are often located in non-residential areas, since they require open spaces for production machinery, parking facilities for the employees, appropriate routes for supply and delivery, and special connections to the national grid, and they have other environmental impacts. Since they have larger spaces compared to commercial sites in urban areas, they represent an appropriate case study for evaluating the technical and economic viability of energy system integration with low-power-density technologies, such as solar and wind, for on-site electricity generation. The available open space surrounding the analysed manufacturing plant can be efficiently used to produce a discrete quantity of energy that is instantaneously and locally consumed; therefore, transmission and distribution losses can be reduced. The use of storage is not required, due to the high and almost constant electricity consumption profile. The energy load of the plant is identified through the analysis of gas and electricity consumption, both internally monitored and reported on the bills. These data are often not recorded and available to third parties, since manufacturing companies usually keep track only of the overall energy expenditures. The solar potential is modelled for a period of 21 years based on global horizontal irradiation data; the hourly direct and diffuse radiation and the energy produced by the system at the optimum pitch angle are calculated. The model is validated using the PVWatts and SAM tools. Wind speed data are available for the same period in one-hour steps at a height of 10 m. 
Since the hub of a typical wind turbine sits at a higher altitude, complementary data for a different location at 50 m have been compared, and a model estimating wind speed at the required height and location is defined. A Weibull statistical distribution is used to evaluate the wind energy potential of the site. The results show that solar and wind energy are, as expected, generally decoupled. Based on the real case study, the percentage of load covered every hour by on-site generation (Level of Autonomy, LA) and the resulting electricity bought from the grid (Expected Energy Not Supplied, EENS) are calculated. The economic viability of the project is assessed through Net Present Value, and the influence of the main technical and economic parameters on NPV is presented. Since the results show that the analysed renewable sources cannot provide enough electricity, integration with a cogeneration technology is studied. Finally, the benefit to energy system integration of wind, solar and a cogeneration technology is evaluated and discussed.
Keywords: demand, energy system integration, load, manufacturing, national grid, renewable energy sources
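The Weibull-based wind resource assessment and the NPV screening described above can be sketched in a few lines. All numeric parameters below (shape k, scale c, discount rate, cash flows) are illustrative assumptions, not values from the study.

```python
import math

def weibull_pdf(v, k, c):
    """Weibull probability density for wind speed v (m/s), shape k, scale c (m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def mean_wind_power_density(k, c, rho=1.225, dv=0.1, v_max=30.0):
    """Mean wind power density (W/m^2): 0.5*rho*v^3 weighted by the
    Weibull distribution and summed numerically over wind speeds."""
    total, v = 0.0, dv
    while v <= v_max:
        total += 0.5 * rho * v ** 3 * weibull_pdf(v, k, c) * dv
        v += dv
    return total

def npv(rate, cashflows):
    """Net Present Value of yearly cash flows; cashflows[0] is at year 0
    (e.g. the initial investment, negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Illustrative values only (not from the paper):
print(round(mean_wind_power_density(k=2.0, c=7.0), 1), "W/m^2")
print(round(npv(0.05, [-100000] + [15000] * 8), 2), "EUR")
```

For a Weibull distribution the analytic mean of v³ is c³Γ(1 + 3/k), so the numerical sum can be checked against the closed form.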
Procedia PDF Downloads 129
Floating Building Potential for Adaptation to Rising Sea Levels: Development of a Performance Based Building Design Framework
Authors: Livia Calcagni
Abstract:
Most of the largest cities in the world are located in areas that are vulnerable to coastal erosion and flooding, both linked to climate change and rising sea levels (RSL). Nevertheless, more and more people are moving to these vulnerable areas as cities keep growing. Architects, engineers and policy makers are called to rethink the way we live and to provide timely and adequate responses, not only by investigating measures to improve the urban fabric, but also by developing strategies capable of planning change and exploring unusual and resilient frontiers of living, such as floating architecture. Since the beginning of the 21st century we have seen a dynamic growth of water-based architecture. At the same time, the shortage of land available for urban development has also led to reclamation of the seabed and to the construction of floating structures. In light of these considerations, the time is ripe to consider floating architecture not only as a full-fledged building typology but also as a full-fledged adaptation solution for RSL. Currently, there is no global international legal framework for urban development on water, and there is no structured performance based building design (PBBD) approach for floating architecture in most countries, let alone national regulatory systems. Thus, the research intends to identify the technological, morphological, functional, economic and managerial requirements that must be considered in the development of the PBBD framework, conceived as a meta-design tool. As floating urban development is most likely to take place as an extension of coastal areas, the needs and design criteria are definitely more similar to those of the urban environment than of the offshore industry. Therefore, the identification and categorization of parameters takes urban-architectural guidelines and regulations as the starting point, taking the missing aspects, such as hydrodynamics, from the offshore and shipping regulatory frameworks.
This study is carried out through an evidence-based assessment of performance guidelines and regulatory systems that are effective in different countries around the world addressing on-land and on-water architecture as well as offshore and shipping industries. It involves evidence-based research and logical argumentation methods. Overall, this paper highlights how inhabiting water is not only a viable response to the problem of RSL, thus a resilient frontier for urban development, but also a response to energy insecurity, clean water and food shortages, environmental concerns and urbanization, in line with Blue Economy principles and the Agenda 2030. Moreover, the discipline of architecture is presented as a fertile field for investigating solutions to cope with climate change and its effects on life safety and quality. Future research involves the development of a decision support system as an information tool to guide the user through the decision-making process, emphasizing the logical interaction between the different potential choices, based on the PBBD.
Keywords: adaptation measures, floating architecture, performance based building design, resilient architecture, rising sea levels
Fault Diagnosis and Fault-Tolerant Control of Bilinear Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings
Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun
Abstract:
Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation and air conditioning) systems in buildings can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system. The application of FTC in HVAC systems has gained attention in the last two decades. The objective is to maintain the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters, so that the reconfigured performance is "as close as possible," in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the works carried out so far in FDI (fault diagnosis and isolation) or FTC consider a linearized model of the studied system. However, such a model is only valid in a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of the HVAC system failure. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, the algorithm inherits the structure of the actual bilinear model of the building thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, i.e., weather dynamics, outdoor air temperature and zone occupancy profile.
A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown input bilinear observers are considered for actuator or system component FDI. The proposed strategy for FTC works as follows: at the first level, FDI algorithms are implemented, making it also possible to estimate the magnitude of the fault. Once the fault is detected, the fault estimate is used to feed the second level and reconfigure the control law so that the expected performance is recovered. This paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method.
Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zones building
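As a rough illustration of the residual-based detection idea (a toy sketch, not the paper's observer), consider a scalar bilinear model x⁺ = a·x + n·x·u + b·u with an additive actuator fault: a Luenberger-style observer driven by the residual r = y − x̂ keeps r near zero until the fault appears. All coefficients below are invented for the sketch.

```python
# Toy scalar bilinear system: x+ = a*x + n*x*u + b*u (+ fault)
# Observer:                  xh+ = a*xh + n*xh*u + b*u + l*r,  r = y - xh
a, n, b, l = 0.9, -0.05, 0.5, 0.4   # illustrative coefficients (assumed stable)

x, xh = 20.0, 15.0                  # true state, observer state
r_log = []
for t in range(60):
    u = 1.0                          # constant control input
    fault = 0.5 if t >= 30 else 0.0  # actuator fault injected at t = 30
    x = a * x + n * x * u + b * u + fault
    r = x - xh                       # residual (full-state measurement assumed)
    xh = a * xh + n * xh * u + b * u + l * r
    r_log.append(abs(r))

print(r_log[29], r_log[35])  # near zero before the fault, clearly nonzero after
```

The residual error obeys r⁺ = (a + n·u − l)·r + fault, so with |a + n·u − l| < 1 it decays in the fault-free case and settles at a nonzero value once the fault acts, which is what a threshold test exploits.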
Performance Parameters of an Abbreviated Breast MRI Protocol
Authors: Andy Ho
Abstract:
Breast cancer is a common cancer in Australia. Early diagnosis is crucial for improving patient outcomes, as later-stage detection correlates with poorer prognoses. While multiparametric MRI offers superior sensitivity in detecting invasive and high-grade breast cancers compared to conventional mammography, its extended scan duration and high costs limit widespread application. As a result, full-protocol MRI screening is typically reserved for patients at elevated risk. Recent advancements in imaging technology have facilitated the development of Abbreviated MRI protocols, which dramatically reduce scan times (<10 minutes, compared to >30 minutes for the full protocol). The potential for Abbreviated MRI to offer a more time- and cost-efficient alternative has implications for improving patient accessibility, reducing appointment durations, and enhancing compliance, which is especially relevant for individuals requiring regular annual screening over several decades. The purpose of this study is to assess the diagnostic efficacy of Abbreviated MRI for breast cancer screening among high-risk patients at the Royal Prince Alfred Hospital (RPA). This study aims to determine the sensitivity, specificity, and inter-reader variability of Abbreviated MRI protocols when interpreted by subspecialty-trained breast radiologists. A systematic review of the RPA’s electronic Picture Archive and Communication System identified high-risk patients, defined by the Australian ‘Medicare Benefits Schedule’ criteria, who underwent breast MRI from 2021 to 2022. Eligible participants were asymptomatic patients under 50 years old who were referred by the High-Risk Clinic due to a high-risk genetic profile or relevant familial history. The MRIs were anonymized, randomized, and interpreted by four breast radiologists, each independently completing standardized proforma evaluations. Radiological findings were compared against histopathology as the gold standard, or against follow-up imaging if biopsies were unavailable.
Statistical metrics, including sensitivity, specificity, and inter-reader variability, were assessed. The Fleiss kappa analysis demonstrated fair inter-reader agreement (kappa = 0.25; 95% CI: 0.19–0.32; p < 0.0001). The sensitivity for detecting malignancies was 0.72, with a specificity of 0.92. For benign lesions, sensitivity and specificity were 0.844 and 0.73, respectively. These findings underline the potential of Abbreviated MRI as a reliable screening tool for malignancies with high specificity, though the reduced sensitivity highlights the importance of robust radiologist training and consistent evaluation standards. Abbreviated MRI protocols show promise as a viable screening option for high-risk patients, combining reduced scan times with acceptable diagnostic accuracy. Further work to refine interpretation practices and optimize training is essential to maximize the protocol’s utility in routine clinical screening and facilitate broader accessibility.
Keywords: abbreviated, breast, cancer, MRI
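The reported metrics follow from a confusion matrix and per-case rating counts. A minimal sketch is below; the counts are invented for illustration (chosen only so the sensitivity/specificity match the abstract's headline figures), not the study's data.

```python
def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

def fleiss_kappa(counts):
    """Fleiss' kappa for inter-reader agreement.
    counts[i][j] = number of readers assigning case i to category j;
    every case must be rated by the same number of readers."""
    n = len(counts)              # cases
    r = sum(counts[0])           # readers per case
    k = len(counts[0])           # categories
    p_j = [sum(row[j] for row in counts) / (n * r) for j in range(k)]
    P_i = [(sum(c * c for c in row) - r) / (r * (r - 1)) for row in counts]
    P_bar = sum(P_i) / n
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Invented counts for a hypothetical set of 100 malignant and 100 benign cases:
print(sensitivity(72, 28), specificity(92, 8))  # 0.72 0.92
```

With perfect agreement (every reader picks the same category for each case) the kappa function returns 1.0, which is a convenient sanity check.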
Effect of Endurance Training on Serum Chemerin Levels and Lipid Profile of Plasma in Obese Women
Authors: A. Moghadasein, M. Ghasemi, S. Fazelifar
Abstract:
Aim: Chemerin is a novel adipokine that plays an important role in regulating lipid metabolism and adipogenesis. Chemerin is dependent on autocrine and paracrine signals for the differentiation and maturation of fat cells; it also regulates glucose uptake in fat cells and stimulates lipolysis. It has been reported that, in adipocytes, chemerin enhances insulin-stimulated glucose uptake and causes the phosphorylation of tyrosine in the insulin receptor substrate. According to the studies, chemerin may increase insulin sensitivity in adipose tissue and is largely associated with body mass index, triglycerides, and blood pressure in those with normal glucose tolerance. There is limited information available regarding the effect of exercise training on serum chemerin concentrations. The purpose of this study was to investigate the effect of endurance training on serum chemerin levels and plasma lipids in overweight women. Methodology: This study was a quasi-experimental research with a pre-post test design. After the required examination and verification of blood pressure by the physician, 22 obese subjects (age: 35.64±5.55 yr, weight: 75.62±9.30 kg, body mass index: 32.4±1.6 kg/m2) were randomly assigned to aerobic training (n= 12) and control (n= 12) groups. Participants completed a questionnaire indicating the lack of a sports history during the past six months, the lack of anti-hypertension drug use, hormone therapy, cardiovascular problems, and complete stoppage of the menstrual cycle. Aerobic training was performed 3 times weekly for 8 weeks. Resting levels of plasma chemerin and metabolic parameters were measured prior to and after the intervention. The control group did not participate in any training program. In this study, ethical considerations included the complete description of the objectives to the study participants and ensuring the confidentiality of their information.
The Kolmogorov-Smirnov and Levene tests were used to check the normal distribution of the data and the homogeneity of variances, respectively. Analysis of variance with repeated measures was used to investigate intra-group changes and inter-group differences in the variables. Statistical operations were performed using SPSS 16, and the significance level of the tests was set at P < 0.05. Results: After 8 weeks of aerobic training, plasma chemerin levels were significantly decreased in the aerobic trained group compared with the control group (p < 0.05). Concurrently, levels of HDL-c were significantly decreased (p < 0.05), whereas levels of cholesterol, TG and LDL-c showed no significant changes (p > 0.05). No significant correlations between chemerin levels and weight loss were observed in the overweight subjects. Conclusion: The present study demonstrated that 8 weeks of aerobic training reduced serum chemerin concentrations in overweight women, whereas the aerobic training programme affected the lipid profile of obese subjects differently. However, further research is warranted in order to unravel the molecular mechanism underlying the range of responses and the role of serum chemerin.
Keywords: chemerin, aerobic training, lipid profile, obese women
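For a pre-post design with only two time points, the repeated-measures comparison reduces to a paired t test (F = t²). A minimal stdlib sketch of that statistic, with invented pre/post values rather than the study's measurements:

```python
import math

def paired_t(pre, post):
    """Paired t statistic for pre/post measurements on the same subjects.
    With only two time points, a one-factor repeated-measures ANOVA is
    equivalent to this paired comparison (F = t**2)."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance of differences
    return mean / math.sqrt(var / n)

# Invented pre/post values for four subjects (illustration only):
print(round(paired_t([10.0, 12.0, 14.0, 16.0], [9.0, 11.0, 12.0, 15.0]), 2))  # -5.0
```

The resulting t would then be compared against a t distribution with n − 1 degrees of freedom at the chosen significance level.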
Improved Soil and Snow Treatment with the Rapid Update Cycle Land-Surface Model for Regional and Global Weather Predictions
Authors: Tatiana G. Smirnova, Stan G. Benjamin
Abstract:
The Rapid Update Cycle (RUC) land surface model (LSM) was the land-surface component in several generations of operational weather prediction models at the National Centers for Environmental Prediction (NCEP) of the National Oceanic and Atmospheric Administration (NOAA). It was designed for short-range weather predictions with an emphasis on severe weather, and it was originally kept intentionally simple to avoid uncertainties from poorly known parameters. Nevertheless, the RUC LSM, when coupled with the hourly-assimilating atmospheric model, can produce a realistic evolution of time-varying soil moisture and temperature, as well as the evolution of snow cover on the ground surface. This result is possible only if the soil/vegetation/snow component of the coupled weather prediction model has sufficient skill to avoid long-term drift. The RUC LSM was first implemented in the operational NCEP Rapid Update Cycle (RUC) weather model in 1998 and later in the Weather Research and Forecasting (WRF) model-based Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR). Being available to the international WRF community, it was implemented in operational weather models in Austria, New Zealand, and Switzerland. Based on feedback from the US weather service offices and the international WRF community, and also based on our own validation, the RUC LSM has matured over the years. A sea-ice module was also added to the RUC LSM for surface predictions over Arctic sea ice. Other modifications include refinements to the snow model and a more accurate specification of albedo, roughness length, and other surface properties. At present, the RUC LSM is being tested in the regional application of the Unified Forecast System (UFS). The next-generation UFS-based regional Rapid Refresh FV3 Standalone (RRFS) model will replace the operational RAP and HRRR at NCEP. Over time, the RUC LSM has participated in several international model intercomparison projects to verify its skill using observed atmospheric forcing.
The ESM-SnowMIP was the last of these experiments, focused on the verification of snow models for open and forested regions. The simulations were performed for ten sites located in different climatic zones of the world, forced with observed atmospheric conditions. While most of the 26 participating models have more sophisticated snow parameterizations than RUC, the RUC LSM achieved a high ranking in simulations of both snow water equivalent and surface temperature. However, the ESM-SnowMIP experiment also revealed some issues in the RUC snow model, which will be addressed in this paper. One of them is the treatment of grid cells partially covered with snow. The RUC snow module computes the energy and moisture budgets of snow-covered and snow-free areas separately, aggregating the solutions at the end of each time step. Such treatment elevates the importance of computing the snow cover fraction in the model. Improvements to the original simplistic threshold-based approach have been implemented and tested both offline and in the coupled weather model. A detailed description of the changes to the snow cover fraction and other modifications to the RUC soil and snow parameterizations will be given in this paper.
Keywords: land-surface models, weather prediction, hydrology, boundary-layer processes
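The partial-snow treatment described above can be caricatured in a few lines. The critical threshold and the flux values below are invented for illustration; real snow cover fraction parameterizations are considerably more involved (depending, e.g., on snow density and vegetation).

```python
def snow_cover_fraction(swe, swe_crit=0.02):
    """Simplistic threshold-style estimate of the grid-cell fraction
    covered by snow, ramping linearly up to full cover at a critical
    snow water equivalent (m). swe_crit is an assumed value."""
    return min(swe / swe_crit, 1.0)

def aggregate_flux(scf, flux_snow, flux_bare):
    """Combine separately computed snow-covered and snow-free surface
    budgets, as in the end-of-timestep aggregation described above."""
    return scf * flux_snow + (1.0 - scf) * flux_bare

print(snow_cover_fraction(0.01))          # 0.5
print(aggregate_flux(0.5, -20.0, 40.0))   # 10.0
```

The aggregation step makes clear why the fraction itself matters: any bias in scf is passed linearly into every blended surface flux.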
Optical Vortex in Asymmetric Arcs of Rotating Intensity
Authors: Mona Mihailescu, Rebeca Tudor, Irina A. Paun, Cristian Kusko, Eugen I. Scarlat, Mihai Kusko
Abstract:
Specific intensity distributions in laser beams are required in many fields: optical communications, material processing, microscopy, and optical tweezers. In optical communications, the information embedded in specific beams and the superposition of multiple beams can be used to increase the capacity of communication channels, employing spatial modulation as an additional degree of freedom besides the already available polarization and wavelength multiplexing. In this regard, optical vortices are of interest due to their potential to carry independent data which can be multiplexed at the transmitter and demultiplexed at the receiver. Their combinations have also been studied in the literature: 1) axial or perpendicular superposition of multiple optical vortices, or 2) combination with other laser beam types: Bessel, Airy. Optical vortices, characterized by a stationary ring-shaped intensity and rotating phase, are achieved using computer generated holograms (CGH) obtained by simulating the interference between a tilted plane wave and a wave passing through a helical phase object. Here, we propose a method to combine information through the reunion of two CGHs. One is obtained using a helical phase distribution, characterized by its topological charge, m. The other is obtained using a conical phase distribution, characterized by its radial factor, r0. Each CGH is obtained using a plane wave with a different tilt: km for the CGH generated from the helical phase object and kr for that generated from the conical phase object. These reunions of two CGHs are calculated as phase optical elements, addressed on the liquid crystal display of a spatial light modulator, to optically process the incident beam for investigations of the diffracted intensity pattern in the far field. For parallel reunion of the two CGHs and high values of the ratio between km and kr, the bright ring in the first diffraction order, specific to optical vortices, is changed into an asymmetric intensity pattern: a number of circle arcs.
The two diffraction orders (+1 and -1) are asymmetrical relative to each other. In different planes along the optical axis, it is observed that this asymmetric intensity pattern rotates around its centre: in the +1 diffraction order the rotation is anticlockwise, and in the -1 diffraction order the rotation is clockwise. The relation between m and r0 controls the diameter of the circle arcs, and the ratio between km and kr controls the number of arcs. For perpendicular reunion of the two CGHs and low values of the ratio between km and kr, the optical vortices are multiplied and focused in different planes, depending on the radial parameter. The first diffraction order contains information about both phase objects. It is incident on the phase masks placed at the receiver, computed using the opposite values of the topological charge or of the radial parameter and displayed successively. Overall, the proposed method is explored in terms of its constructive parameters, for the possibility offered by the combination of different types of beams, which can be used in robust optical communications.
Keywords: asymmetrical diffraction orders, computer generated holograms, conical phase distribution, optical vortices, spatial light modulator
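A minimal numerical sketch of such a CGH, assuming the usual fork-hologram construction (a tilted plane wave interfering with a helical phase of charge m, optionally adding a conical term with radial factor r0). The grid size and parameter values are arbitrary choices for illustration.

```python
import math

def reunion_cgh(size=64, m=3, r0=0.0, tilt=0.2):
    """Binary 'fork' CGH: interference of a tilted plane wave (tilt plays
    the role of km) with a wave carrying helical phase m*theta plus an
    optional conical phase r0*rho. Returns a size x size matrix of 0/1
    pixels; its +1/-1 diffraction orders carry vortices of charge +m/-m."""
    half = size // 2
    cgh = []
    for j in range(-half, half):
        row = []
        for i in range(-half, half):
            theta = math.atan2(j, i)                  # azimuthal angle
            rho = math.hypot(i, j)                    # radial coordinate
            intensity = 0.5 * (1.0 + math.cos(tilt * i - m * theta - r0 * rho))
            row.append(1 if intensity > 0.5 else 0)   # binarized hologram
        cgh.append(row)
    return cgh

h = reunion_cgh()
print(len(h), len(h[0]))  # 64 64
```

With r0 = 0 the pattern is the classic fork grating with m extra fringes meeting at the centre; a nonzero r0 superposes the conical (axicon-like) phase, echoing the reunion of helical and conical CGHs described above.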
Enzymatic Determination of Limonene in Red Clover Genotypes
Authors: Andrés Quiroz, Emilio Hormazabal, Ana Mutis, Fernando Ortega, Manuel Chacón-Fuentes, Leonardo Parra
Abstract:
Red clover (Trifolium pratense L.) is an important forage species in temperate regions of the world. The main limitation of this species worldwide is a lack of persistence related to the high mortality of plants due to a complex of biotic and abiotic factors, determining a life span of two or three seasons. Because of the importance of red clover in Chile, a red clover breeding program was started at the INIA Carillanca Research Center in 1989, with the main objective of improving the survival of plants, forage yield, and persistence. The main criteria for selecting new varieties have been based on agronomical parameters and biotic factors. The main biotic factor associated with red clover mortality in Chile is Hylastinus obscurus (Coleoptera: Curculionidae). Both larvae and adults feed on the roots, causing weakening and subsequent death of clover plants. Pesticides have not been successful in controlling infestations of this root borer. Therefore, alternative strategies for controlling this pest are a high priority for red clover producers. Currently, the role of semiochemicals in the interaction between H. obscurus and red clover plants has been widely studied by our group. Specifically, limonene has been identified from red clover foliage as eliciting repellency in the root borer. Limonene is generated in the plant from two independent biosynthetic pathways: the mevalonic acid pathway and the deoxyxylulose phosphate pathway. Mevalonate pathway enzymes are localized in the cytosol, whereas the deoxyxylulose phosphate pathway enzymes are found in plastids. In summary, limonene can be determined by an enzymatic bioassay using GPP as substrate and by limonene synthase expression. Therefore, the main objective of this work was to study the genetic variation of limonene in material provided by INIA's red clover breeding program.
Protein extraction was carried out by homogenizing 250 mg of leaf tissue, which was suspended in 6 mL of extraction buffer (PEG 1500, PVP-30, 20 mM MgCl₂ and antioxidants) and stirred on ice for 20 min. After centrifugation, aliquots of 2.5 mL were desalted on PD-10 columns, resulting in a final volume of 3.5 mL. Protein determination was performed according to Bradford, with BSA as a standard. Monoterpene synthase assays were performed with 50 µL of protein extract transferred into gas-tight 2 mL crimp-seal vials after the addition of 4 µL MgCl₂ and 41 µL assay buffer. The assay was started by adding 5 µL of a GPP solution. The mixture was incubated for 30 min at 40 °C. Biosynthesized limonene was quantified in a GC equipped with a chiral column, using synthetic R- and S-limonene standards. The enzymatic production of R- and S-limonene from different Superqueli-Carillanca genotypes is shown in this work. Preliminary results showed significant differences in limonene content among the genotypes analyzed. These results constitute an important basis for selecting genotypes with a high content of this repellent monoterpene towards H. obscurus.
Keywords: head space, limonene enzymatic determination, red clover, Hylastinus obscurus
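Quantification against a BSA standard curve, as in the Bradford step above, amounts to a linear least-squares fit and its inversion. A minimal sketch; the standard concentrations and absorbance values below are invented for illustration.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for a standard curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def concentration(a595, slope, intercept):
    """Invert the standard curve: protein concentration from absorbance at 595 nm."""
    return (a595 - intercept) / slope

# Invented BSA standards (mg/mL) and their A595 readings:
slope, intercept = linear_fit([0.0, 0.25, 0.5, 1.0], [0.05, 0.17, 0.29, 0.53])
print(round(concentration(0.41, slope, intercept), 3))  # 0.75
```

In practice one would check the fit quality (e.g. R²) and keep sample absorbances within the range of the standards before inverting the curve.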
Coastal Foodscapes as Nature-Based Coastal Regeneration Systems
Authors: Gulce Kanturer Yasar, Hayriye Esbah Tuncay
Abstract:
Cultivated food production systems have coexisted harmoniously with nature for thousands of years through ancient techniques. Built on experience, experimentation, and discovery, these culturally embedded methods have evolved to sustain food production, restore ecosystems, and adapt to nature. In this era, as we seek solutions to food security challenges, enhancing and repairing our food production systems is crucial, making them more resilient to future disasters without harming the ecosystem. Instead of unsustainable conventional systems with ongoing destructive effects, we must investigate innovative and restorative production systems that integrate ancient wisdom and technology. Whether we consider agricultural fields, pastures, forests, coastal wetland ecosystems, or lagoons, it is crucial to harness the potential of these natural resources in addressing future global challenges, fostering both socio-economic resilience and ecological sustainability through strategic organization of food production. When thoughtfully designed and managed, marine-based food production has the potential to function as a living infrastructure system that addresses social and environmental challenges, despite its known adverse impacts on the environment and local economies. These areas are also stages of daily life, vibrant hubs where local culture is produced and shared; they contribute to the distinctive rural character of coastal settlements and exhibit numerous spatial expressions of a public nature. Throughout the history of humanity, indigenous communities have engaged in sustainable production practices that provide goods for food, trade, culture, and the environment. Ecosystem restoration and socio-economic resilience can be achieved by combining production techniques based on the ecological knowledge developed by indigenous societies with modern technologies.
Coastal lagoons are highly productive coastal features that provide various natural services and societal values. Because of their placement within the coastal landscape, they are especially vulnerable to the severe physical, ecological, and social impacts of changing and challenging global conditions. Coastal lagoons are crucial in sustaining fisheries productivity, providing storm protection, supporting tourism, and offering other natural services that hold significant value for society. Although there is considerable literature on the physical and ecological dimensions of lagoons, much less focuses on their economic and social values. This study will discuss the possibilities for coastal lagoons to be both ecologically sustainable and socio-economically resilient while maintaining their productivity, by combining local techniques and modern technologies. The case study will present Turkey’s traditional aquaculture method, the "dalyan," predominantly operated by small-scale farmers in coastal lagoons. Due to human, ecological, and economic factors, dalyans are losing their landscape characteristics and efficiency. These 1000-year-old techniques, rooted in centuries of traditional and agroecological knowledge, are under threat from tourism, urbanization, and unsustainable agricultural practices. As a result, the number of active dalyans has diminished from 29 to approximately 4-5. To deal with the adverse socio-economic and ecological consequences for Turkey's coastal areas, it is essential to conserve the dalyans by protecting their indigenous practices while incorporating contemporary methods. This study seeks to generate scenarios that envision the potential ways protection and development can manifest within the case study areas.
Keywords: coastal foodscape, lagoon aquaculture, regenerative food systems, watershed food networks
Optimization of Biogas Production Using Co-Digestion Feedstocks via Anaerobic Technology
Authors: E Tolufase
Abstract:
The demand for, high costs of, and health implications of using energy derived from hydrocarbon compounds have necessitated the continuous search for alternative sources of energy. The world energy market is facing several challenges: depletion of fossil fuel reserves, population explosion, lack of energy security, and economic and urbanization growth; in Nigeria, some rural areas also still depend largely on wood, charcoal, kerosene and petrol, among others, as their sources of energy. The need to overcome these shortfalls in energy supply and demand, as well as the risks from global climate change due to greenhouse gas emissions and other pollutants from fossil fuel combustion, has brought much attention to efficiently harnessing renewable energy sources. Among the renewable energy resources, biogas is very promising as a clean energy technology for power production, vehicle and domestic usage. Therefore, optimization of biogas yield and quality is imperative. Hence, this study investigated the yield and quality of biogas using a low-cost bio-digester and a combination of various feedstocks, referred to as co-digestion. A batch (discontinuous) bio-digester type was used because it was cheap, easy to operate, and appropriate for the different substrates used to get the desired results. Three substrates were used, cow dung, chicken droppings and lemon grass, digested in five separate 21-litre digesters, A, B, C, D, and E, and the gas collection system was designed using locally available materials. For single digestion, cow dung, chicken droppings and lemon grass were loaded into bio-digesters A, B, and C respectively; the three substrates were co-digested in digester D in a mixed ratio of 7:1:2 and in digester E in a ratio of 5:3:2. The respective feedstock materials were collected locally, digested and analyzed in accordance with standard procedures. They were pre-fermented for a period of 10 days before being introduced into the digesters.
They were digested for a retention period of 28 days, and the physicochemical parameters, namely pressure, temperature, pH, volume of the gas collector system and volume of biogas produced, were all closely monitored and recorded daily. The values of pH and temperature ranged over 6.0-8.0 and 22 °C-35 °C, respectively. For the single substrates, bio-digester A (cow dung only) produced biogas of total volume 0.1607 m³ (average volume of 0.0054 m³ daily), while B (chicken droppings) produced 0.1722 m³ (average of 0.0057 m³ daily) and C (lemon grass) produced 0.1035 m³ (average of 0.0035 m³ daily). For the co-digested substrates, the total biogas produced in bio-digester D was 0.2007 m³ (average volume of 0.0067 m³ daily) and bio-digester E produced 0.1991 m³ (average volume of 0.0066 m³ daily). It is obvious from the results that combining different substrates gave higher yields than when a single feedstock was used, and also that the mixing ratio played a role in the yield improvement. Bio-digesters D and E contained the same substrates but mixed in different ratios; a higher yield was noticed in D with a mixing ratio of 7:1:2 than in E with a ratio of 5:3:2. Therefore, co-digestion of substrates and mixing proportions are important factors for biogas production optimization.
Keywords: anaerobic, batch, biogas, biodigester, digestion, fermentation, optimization
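The headline comparison, co-digestion versus the best single feedstock, can be checked directly from the totals reported above:

```python
def percent_gain(co_digested_m3, best_single_m3):
    """Relative yield improvement of a co-digested run over the best
    single-feedstock digester, in percent."""
    return 100.0 * (co_digested_m3 - best_single_m3) / best_single_m3

# Totals from the abstract: digester D (7:1:2 mix) vs. B (chicken droppings)
print(round(percent_gain(0.2007, 0.1722), 1))  # 16.6
```

So digester D's co-digested mix yielded roughly 16.6% more biogas than the best single substrate, which is the improvement the conclusion attributes to co-digestion and mixing ratio.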
Procedia PDF Downloads 271
191 Charged Amphiphilic Polypeptide Based Micelle Hydrogel Composite for Dual Drug Release
Authors: Monika Patel, Kazuaki Matsumura
Abstract:
Synthetic hydrogels, with their unique properties such as porosity, strength, and swelling in aqueous environments, are used in many fields, from food additives to regenerative medicine and from diagnostics and pharmaceuticals to drug delivery systems (DDS). But hydrogels also have limitations in terms of the homogeneity of drug distribution and the quantity of loaded drugs. As an alternative, polymeric micelles are extensively used as DDS. With their ease of self-assembly and distinct stability, they remarkably improve the solubility of hydrophobic drugs. However, combination therapy is the present need, and so are systems capable of releasing more than one drug. One of the major challenges for DDS, which simple systems cannot meet, is to control the release of each drug independently. In this work, we present an amphiphilic polypeptide based micelle hydrogel composite to study dual drug release for wound healing purposes, using Amphotericin B (AmpB) and curcumin as model drugs. Firstly, two differently charged amphiphilic polypeptide chains were prepared, namely poly-L-lysine-b-poly-phenylalanine (PLL-PPA) and poly-glutamic acid-b-poly-phenylalanine (PGA-PPA), through ring opening polymerization of amino acid N-carboxyanhydrides. These polymers readily self-assemble to form micelles with the hydrophobic PPA block as core and hydrophilic PLL/PGA as shell, with an average diameter of about 280 nm. The micelles thus formed were loaded with the model drugs: the PLL-PPA micelles with curcumin and the PGA-PPA micelles with AmpB, by the dialysis method. Drug-loaded micelles showed a slight increase in mean diameter and were fairly stable in solution and in lyophilized form. To form the micelle hydrogel composite, the drug-loaded micelles were dissolved and cross-linked using genipin, which uses the free –NH2 groups of the PLL-PPA micelles to form a hydrogel network, with free PGA-PPA micelles trapped within the 3D scaffold formed.
Different composites were tested by changing the weight ratios of the two micelles; the resulting surface charge was seen to shift from positive to negative with increasing PGA-PPA ratio. Composites with a high surface charge showed a burst release of drug in the initial phase, whereas composites with a relatively low net charge showed a sustained release. Thus, the resultant surface charge of the composite can be tuned to tune its drug release profile. Also, when studying the effect of the degree of cross-linking among the PLL-PPA particles on dual drug release, it was seen that as the degree of cross-linking increases, the PGA-PPA particles show an increasing tendency to burst-release their drug (AmpB), whereas, on the contrary, the PLL-PPA particles show a slower release of curcumin with increasing cross-linking density. Thus, two different pharmacokinetic drug profiles were obtained by changing the degree of cross-linking. In conclusion, a unique charged amphiphilic polypeptide based micelle hydrogel composite for dual drug delivery was developed. This composite can be finely tuned to match required drug release profiles by changing simple parameters such as composition, cross-linking and pH. Keywords: amphiphilic polypeptide, dual drug release, micelle hydrogel composite, tunable DDS
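The burst-versus-sustained contrast described above can be illustrated with a simple two-term first-order release model. This is purely illustrative: the abstract reports no kinetic parameters, so the burst fractions, rate constants and time units below are all hypothetical assumptions:

```python
# Illustration only: a two-compartment first-order release model, where a
# larger "burst fraction" mimics the high-surface-charge composites described
# in the abstract. All rate constants and units are hypothetical.
import math

def fraction_released(t, f_burst, k_fast=1.0, k_slow=0.05):
    """Cumulative fraction of drug released at time t (hours, assumed)."""
    return (f_burst * (1.0 - math.exp(-k_fast * t))
            + (1.0 - f_burst) * (1.0 - math.exp(-k_slow * t)))

burst = fraction_released(6.0, f_burst=0.7)      # high net surface charge
sustained = fraction_released(6.0, f_burst=0.2)  # low net surface charge
print(f"fraction released at t=6: burst {burst:.2f}, sustained {sustained:.2f}")
```

With these assumed parameters the burst-dominated composite has released much more drug at early times, while both profiles eventually approach complete release.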
Procedia PDF Downloads 207
190 Vortex Generation to Model the Airflow Downstream of a Piezoelectric Fan Array
Authors: Alastair Hales, Xi Jiang, Siming Zhang
Abstract:
Numerical methods are used to generate vortices in a domain. Through considered design, two counter-rotating vortices may interact and effectively drive one another downstream. This phenomenon is comparable to the vortex interaction that occurs in a region immediately downstream from two counter-oscillating piezoelectric (PE) fan blades. PE fans are small blades clamped at one end and driven to oscillate at their first natural frequency by an extremely low powered actuator. In operation, the high oscillation amplitude and frequency generate sufficient blade tip speed through the surrounding air to create downstream air flow. PE fans are considered an ideal solution for low power hot spot cooling in a range of small electronic devices, but a single blade does not typically induce enough air flow to be considered a direct alternative to conventional air movers, such as axial fans. The development of face-to-face PE fan arrays containing multiple blades oscillating in counter-phase to one another is essential for expanding the range of potential PE fan applications regarding the cooling of power electronics. Even in an unoptimised state, these arrays are capable of moving air volumes comparable to axial fans with less than 50% of the power demand. Replicating the airflow generated by face-to-face PE fan arrays without including the actual blades in the model reduces the process’s computational demands and enhances the rate of innovation and development in the field. Vortices are generated at a defined inlet using a time-dependent velocity profile function, which pulsates the inlet air velocity magnitude. This induces vortex generation in the considered domain, and these vortices are shown to separate and propagate downstream in a regular manner. The generation and propagation of a single vortex are compared to an equivalent vortex generated from a PE fan blade in a previous experimental investigation. 
Vortex separation is found to be accurately replicated in the present numerical model. Additionally, the downstream trajectories of the vortices' centres vary by just 10.5%, and the size and strength of the vortices differ by a maximum of 10.6%. Through non-dimensionalisation, the numerical method is shown to be valid for PE fan blades with parameters differing from the specific case investigated. These thorough validation methods verify that the numerical model may be used to replicate vortex formation from an oscillating PE fan blade. An investigation is carried out to evaluate the effects of varying the distance between two PE fan blades, the pitch. At small pitch, the vorticity in the domain is maximised, along with turbulence in the near vicinity of the inlet zones. It is proposed that face-to-face PE fan arrays, oscillating in counter-phase, should have a minimal pitch to optimally cool nearby heat sources. On the other hand, downstream airflow is maximised at a larger pitch, where the vortices can fully form and effectively drive one another downstream. As such, a larger pitch should be implemented when bulk airflow generation is the desired result. Keywords: piezoelectric fans, low energy cooling, vortex formation, computational fluid dynamics
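The pulsating inlet condition used to shed vortices might be sketched as follows. The abstract does not give the functional form of the time-dependent velocity profile, so the sinusoidal modulation, mean speed, amplitude and frequency here are all assumptions for illustration:

```python
# Assumed form of a pulsating inlet velocity magnitude: a mean speed modulated
# sinusoidally at the blade oscillation frequency. All parameter values are
# illustrative, not taken from the study.
import math

def inlet_velocity(t, u_mean=5.0, amplitude=0.8, freq_hz=60.0):
    """Inlet velocity magnitude (m/s) at time t; parameters are assumptions."""
    return u_mean * (1.0 + amplitude * math.sin(2.0 * math.pi * freq_hz * t))

# One oscillation period sampled at 8 points:
period = 1.0 / 60.0
samples = [inlet_velocity(i * period / 8) for i in range(8)]
print(f"peak inlet velocity ~ {max(samples):.2f} m/s")
```

The magnitude oscillates between u_mean·(1 − amplitude) and u_mean·(1 + amplitude); in a CFD setup such a function would be imposed as a transient inlet boundary condition to induce the periodic vortex shedding described above.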
Procedia PDF Downloads 182
189 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts
Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig
Abstract:
This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety regulations, such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that could previously be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g. pressure, velocities, flow heights, runout lengths) to be predicted. Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies to the fluid dynamical laws of other materials are drawn. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility; hence model development is compelled to introduce further simplifications and the related uncertainties. In the light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, potentials for improvement, and their application in practice.
To address these questions, a survey was conducted among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries. In the questionnaire, special attention is drawn to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty and the reliability of the results. Furthermore, the degree to which a simulation result influences decision making for a hazard assessment was tested. A discrepancy could be found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations and silent witnesses, among others, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on the manner of modeling could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations. Keywords: expert interview, hazard management, modeling, simulation, snow avalanche
Procedia PDF Downloads 326
188 A Study of the Trap of Multi-Homing in Customers: A Comparative Case Study of Digital Payments
Authors: Shari S. C. Shang, Lynn S. L. Chiu
Abstract:
In the digital payment market, some consumers use only one payment wallet while many others play multi-homing with a variety of payment services. With the diffusion of new payment systems, we examined the determinants of the adoption of multi-homing behavior. This study aims to understand how a digital payment provider dynamically expands business touch points with cross-business strategies to enrich the digital ecosystem and avoid the trap of multi-homing in customers. By synthesizing the platform ecosystem literature, we constructed a two-dimensional research framework with one determinant of user digital behavior, from offline to online intentions, and the other determinant of digital payment touch points, from convenient accessibility to cross-business platforms. To explore on a broader scale, we selected 12 digital payment services from five countries: the UK, the US, Japan, Korea, and Taiwan. Based on the interplay of user digital behaviors and payment touch points, we group the study cases into four types: (1) Channel Initiated: users originate from retailers, with high access to in-store shopping and face-to-face guidance for payment adoption. Providers offer rewards for customer loyalty and secure the retailer's efficient cash flow management. (2) Social Media Dependent: users are usually digital natives with high access to social media or the internet, who shop and pay digitally. Providers might not own physical or online shops but are licensed to aggregate money flows through virtual ecosystems. (3) Early Life Engagement: digital banks race to capture the next generation, from popularity to profitability. This type of payment aims to give children a taste of financial freedom while letting parents track their spending. Providers seek to capitalize on the digital payment and e-commerce boom and to hold on to new customers into adulthood.
(4) Traditional Banking: plastic credit cards are purposely designed as a control group to track the evolution of business strategies in digital payments. Traditional credit card users may follow the bank's digital strategy to land on different types of digital wallets, or mostly keep using plastic credit cards. This research analyzed the business growth models and inter-firm coopetition strategies of the selected cases. Results of the multiple case analysis reveal that channel-initiated payments bundled rewards with retailers' business discounts for recurring purchases. They also extended other financial services, such as insurance, to fulfill customers' new demands. In contrast, social media dependent payments developed new usages and new value creation, such as P2P money transfer through network effects among virtual social ties, while early life engagement payments offer virtual banking products to children, who are digital natives but overlooked by incumbents; this has disrupted banking business domains in preparation for the metaverse economy. Lastly, the control group of traditional plastic credit cards has gradually converted to a BaaS (banking as a service) model depending on customers' preferences. Multi-homing behavior is unavoidable in digital payment competition. Payment providers may encounter multiple waves of multi-homing threats after a short period of success. A dynamic cross-business collaboration strategy should be explored to continuously evolve the digital ecosystems and allow users a broader shopping experience and continual usage. Keywords: digital payment, digital ecosystems, multihoming users, cross business strategy, user digital behavior intentions
Procedia PDF Downloads 158
187 Tailoring Piezoelectricity of PVDF Fibers with Voltage Polarity and Humidity in Electrospinning
Authors: Piotr K. Szewczyk, Arkadiusz Gradys, Sungkyun Kim, Luana Persano, Mateusz M. Marzec, Oleksander Kryshtal, Andrzej Bernasik, Sohini Kar-Narayan, Pawel Sajkiewicz, Urszula Stachewicz
Abstract:
Piezoelectric polymers have received great attention in smart textiles, wearables, and flexible electronics. Their potential applications range from devices that could operate without traditional power sources, through self-powering sensors, up to implantable biosensors. Semi-crystalline PVDF is often proposed as the main candidate for industrial-scale applications, as it exhibits exceptional energy harvesting efficiency compared to other polymers, combined with high mechanical strength and thermal stability. Plenty of approaches have been proposed for obtaining PVDF rich in the desired β-phase, with electric poling, thermal annealing, and mechanical stretching being the most prevalent. Electrospinning is a highly tunable technique that provides a one-step process for obtaining highly piezoelectric PVDF fibers without the need for post-treatment. In this study, the influence of voltage polarity and relative humidity on electrospun PVDF fibers was investigated, with the main focus on piezoelectric β-phase content and piezoelectric performance. The morphology and internal structure of the fibers were investigated using scanning (SEM) and transmission (TEM) electron microscopy. Fourier transform infrared spectroscopy (FTIR), wide-angle X-ray scattering (WAXS) and differential scanning calorimetry (DSC) were used to characterize the phase composition of electrospun PVDF. Additionally, surface chemistry was verified with X-ray photoelectron spectroscopy (XPS). The piezoelectric performance of individual electrospun PVDF fibers was measured using piezoresponse force microscopy (PFM), and the power output from meshes was analyzed via custom-built equipment. To prepare the solution for electrospinning, PVDF pellets were dissolved in a 1:1 dimethylacetamide and acetone solution to achieve a 24% solution. Fibers were electrospun with a constant voltage of +/-15 kV applied to a stainless steel nozzle with an inner diameter of 0.8 mm.
The flow rate was kept constant at 6 ml h⁻¹. The electrospinning of PVDF was performed at T = 25°C and relative humidity of 30 and 60% for the PVDF30+/- and PVDF60+/- samples, respectively, in an environmental chamber. The SEM and TEM analysis of fibers produced at the lower relative humidity of 30% (PVDF30+/-) showed a smooth surface, in contrast to fibers obtained at 60% relative humidity (PVDF60+/-), which had a wrinkled surface and, additionally, internal voids. XPS results confirmed lower fluorine content at the surface of PVDF- fibers obtained by electrospinning with negative voltage polarity compared to the PVDF+ fibers obtained with positive voltage polarity. The changes in surface composition measured with XPS were found to influence the piezoelectric performance of the obtained fibers, which was further confirmed by PFM as well as by a custom-built fiber-based piezoelectric generator. For the PVDF60+/- samples, humidity led to an increase of β-phase content in the PVDF fibers, as confirmed by FTIR, WAXS, and DSC measurements, which showed almost two times higher concentrations of β-phase. A combination of negative voltage polarity with high relative humidity led to fibers with the highest β-phase content and the best piezoelectric performance of all investigated samples. This study outlines the possibility of producing electrospun PVDF fibers with tunable piezoelectric performance in a one-step electrospinning process by controlling relative humidity and voltage polarity conditions. Acknowledgment: This research was conducted within the funding from the Sonata Bis 5 project granted by the National Science Centre, No 2015/18/E/ST5/00230, and supported by the infrastructure at the International Centre of Electron Microscopy for Materials Science (IC-EM) at AGH University of Science and Technology. The PFM measurements were supported by an STSM Grant from COST Action CA17107. Keywords: crystallinity, electrospinning, PVDF, voltage polarity
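A relation commonly used in the PVDF literature (the Gregorio relation, not stated in the abstract) quantifies relative β-phase content from the FTIR absorbances of the α (~763 cm⁻¹) and β (~840 cm⁻¹) bands: F(β) = A_β / (1.26·A_α + A_β). A minimal sketch, with invented absorbance values for illustration:

```python
# Hedged sketch: relative beta-phase fraction of PVDF from FTIR band
# absorbances via the commonly used Gregorio relation. The absorbance values
# below are invented for illustration; they are not the study's data.
def beta_phase_fraction(a_alpha, a_beta, k_ratio=1.26):
    """Relative beta-phase fraction from characteristic-band absorbances."""
    return a_beta / (k_ratio * a_alpha + a_beta)

low_humidity = beta_phase_fraction(a_alpha=0.40, a_beta=0.35)   # hypothetical 30% RH
high_humidity = beta_phase_fraction(a_alpha=0.20, a_beta=0.55)  # hypothetical 60% RH
print(f"F(beta) 30% RH ~ {low_humidity:.2f}, 60% RH ~ {high_humidity:.2f}")
```

A stronger β band relative to the α band pushes F(β) toward 1, mirroring the trend reported above, in which higher humidity increased the β-phase content.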
Procedia PDF Downloads 134
186 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems
Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille
Abstract:
Developing better solutions for train rescheduling problems has drawn the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects. It focuses on timetables, rolling stock and crew duties, but does not take into account infrastructure limits. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to optimally share the available power among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as the train speed profiles, the voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a phase of sensitivity analysis in order to analyze the behavior of the system and assist the decision making process and/or a more precise optimization. This approach is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the output variation. Factor fixing then allows calibration of the input variables which do not influence the outputs. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing.
The approach is tested on a simple railway system, with nominal traffic running on a single track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the trains' departure times, the train speed reduction at a given position, and the number of trains (cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing to more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. A Pareto front is also built. Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable
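As a concrete illustration of factor prioritization with Sobol indices, the sketch below estimates a first-order index using the standard pick-freeze (Saltelli-type) Monte Carlo estimator on a toy two-factor model, not the railway simulator itself; the model, sample size and seed are illustrative:

```python
# Pick-freeze Monte Carlo estimate of the first-order Sobol index S1 for a toy
# model Y = 3*X1 + X2 with X1, X2 ~ U(0,1) independent. Analytically,
# S1 = 3^2 / (3^2 + 1^2) = 0.9: X1 dominates the output variance, so factor
# prioritization ranks it first (and factor fixing could freeze X2).
import random

random.seed(42)
N = 200_000

def model(x1, x2):
    return 3.0 * x1 + x2

A = [(random.random(), random.random()) for _ in range(N)]  # sample matrix A
B = [(random.random(), random.random()) for _ in range(N)]  # sample matrix B

yA = [model(x1, x2) for x1, x2 in A]
yB = [model(x1, x2) for x1, x2 in B]
# "Freeze" X1: keep X1 from A, take X2 from B.
yC1 = [model(a[0], b[1]) for a, b in zip(A, B)]

mean = lambda v: sum(v) / len(v)
f0 = mean(yA)
var_y = mean([y * y for y in yA]) - f0 ** 2
S1 = (mean([ya * yc for ya, yc in zip(yA, yC1)]) - f0 * mean(yB)) / var_y
print(f"estimated S1 ~ {S1:.2f} (analytic value 0.9)")
```

The same estimator applied to the railway simulation outputs would identify, for example, the departure-time spacing as the dominant factor, as reported above.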
Procedia PDF Downloads 399
185 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal
Authors: C. Bateira, J. Fernandes, A. Costa
Abstract:
The Douro Demarcated Region (DDR) is a Port wine production region. In the northeast of Portugal, the strong incision of the Douro valley has developed very steep slopes, organized in agricultural terraces, which have experienced an intense and deep transformation in order to implement the mechanization of the work. The old terrace system, based on vertical stone wall support structures, was replaced by terraces with earth embankments, which have experienced major instability. This terrace instability has important economic and financial consequences for the agricultural enterprises. This paper presents and develops cartographic tools to assess embankment instability and identify the areas prone to instability. The priority in this evaluation is the use of physically based mathematical models and the development of a validation process based on an inventory of past embankment instability. We used the shallow landslide stability model (SHALSTAB), based on physical parameters such as cohesion (c'), friction angle (ф), hydraulic conductivity, soil depth, soil specific weight (ϱ), slope angle (α) and contributing areas computed by the Multiple Flow Direction (MFD) method. A terraced area can be analysed by these models only if we have very detailed information representative of the terrain morphology, on which the slope angle and the contributing areas depend. We achieve this using digital elevation models (DEM) with high resolution (pixels with 40 cm side) derived from a set of photographs taken by a flight at 100 m altitude with a pixel resolution of 12 cm. The slope angle results from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element in defining the spatial variation of soil saturation. That internal flow is based on the DEM, supported by the statement that the interflow, although not coincident with the superficial flow, has important similarity with it.
Electrical resistivity monitoring values were related to the MFD contributing areas built from a DEM of 1 m resolution and revealed a consistent correlation. That analysis, performed on the area, showed a good correlation, with R² of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering that, a DEM with 1 m resolution was the basis to model the real internal flow. Thus, we assumed that the contributing area at 1 m resolution modelled by MFD is representative of the internal flow of the area. To solve this problem we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, with several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken by a flight at 5 km altitude. Using this combination of maps, we modelled several final maps of terrace instability and performed a validation process with the contingency matrix. The best final instability map combines the slope map from a DEM of 40 cm resolution and an MFD map from a DEM of 1 m resolution, with a True Positive Rate (TPR) of 0.97, a False Positive Rate (FPR) of 0.47, Accuracy (ACC) of 0.53, Precision (PVC) of 0.0004 and a TPR/FPR ratio of 2.06. Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards
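The validation metrics quoted above follow from standard contingency (confusion) matrix definitions. A minimal sketch of those definitions; the underlying cell counts are not given in the abstract, so only the reported TPR/FPR ratio is checked directly:

```python
# Standard contingency-matrix metrics, as used in the validation above.
# tp/fp/tn/fn are counts of true/false positives and negatives.
def contingency_metrics(tp, fp, tn, fn):
    tpr = tp / (tp + fn)                    # True Positive Rate (sensitivity)
    fpr = fp / (fp + tn)                    # False Positive Rate
    acc = (tp + tn) / (tp + fp + tn + fn)   # Accuracy
    ppv = tp / (tp + fp)                    # Precision
    return {"TPR": tpr, "FPR": fpr, "ACC": acc, "PPV": ppv, "TPR/FPR": tpr / fpr}

# Reported best map: TPR = 0.97, FPR = 0.47 -> ratio:
print(round(0.97 / 0.47, 2))  # 2.06
```

The reported ratio of 2.06 is consistent with the stated TPR and FPR, i.e. the best map detects unstable cells about twice as often as it raises false alarms.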
Procedia PDF Downloads 177
184 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach
Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft
Abstract:
Chronic hepatitis B virus (HBV) infection can be treated, for example, with nucleos(t)ide analogues (NA), which inhibit HBV replication. However, these have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NA therapy needs to be taken life-long and is not available to all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires the seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA typing) rather than only one. However, the values of these variables are collected independently. They are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently in these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling the human immune systems.
The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients (e.g., clinical data, immune cell response, and HLA typing) with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This new technique enables us to harmonize and standardize heterogeneous datasets within the defined model of the data integration system, which will be evaluated in a knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, an analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate the factors playing a role in a holistic profile of patients with HBsAg loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, which comprises a multidisciplinary team of computer scientists, infection biologists, and immunologists. Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology
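The knowledge-graph idea described above can be illustrated with a minimal sketch: facts stored as subject-predicate-object triples and retrieved with a wildcard pattern query. All identifiers and values below are invented for illustration; the actual study uses biomedical ontologies and a full graph data model rather than this toy store:

```python
# Toy triple store illustrating the KG representation: each fact is a
# (subject, predicate, object) statement. Patient IDs, predicates and values
# are invented for illustration only.
triples = {
    ("patient:87", "hasMarker", "HBsAg"),
    ("patient:87", "hbsagLevel", "low"),
    ("patient:87", "hlaType", "HLA-A*02:01"),
    ("patient:12", "hasMarker", "HBcrAg"),
}

def query(s=None, p=None, o=None):
    """Return triples matching the given pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(s="patient:87", p="hasMarker"))  # markers recorded for patient 87
```

The unified triple form is what lets independently collected variables (clinical data, HLA typing, immune cell response) be queried together once their identifiers have been harmonized.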
Procedia PDF Downloads 108
183 A Mathematical Model for Studying Landing Dynamics of a Typical Lunar Soft Lander
Authors: Johns Paul, Santhosh J. Nalluveettil, P. Purushothaman, M. Premdas
Abstract:
Lunar landing is one of the most critical phases of a lunar mission. The lander is provided with a soft landing system to prevent structural damage of the lunar module by absorbing the landing shock, and also to assure stability during landing. Presently available software is not capable of simulating rigid body dynamics coupled with contact simulation and elastic/plastic deformation analysis. Hence a separate mathematical model has been generated for studying the dynamics of a typical lunar soft lander. Parameters used in the analysis include the lunar surface slope, coefficient of friction, initial touchdown velocity (vertical and horizontal), mass and moment of inertia of the lander, crushing force due to the energy absorbing material in the legs, number of legs and geometry of the lander. The mathematical model is capable of simulating the plastic and elastic deformation of the honeycomb, the frictional force between landing leg and lunar soil, surface contact, the lunar gravitational force, rigid body dynamics and the linkage dynamics of the inverted tripod landing gear. The nonlinear differential equations generated for studying the dynamics of the lunar lander are solved by numerical methods, using a Matlab program as the computational tool. The position of each kinematic joint is defined by mathematical equations for the generation of the equations of motion. All hinged locations are defined by position vectors with respect to body-fixed coordinates. The vehicle rigid body rotations and motions about the body coordinates are due only to the external forces and moments arising from the footpad reaction force due to impact, the footpad frictional force and the weight of the vehicle. All these forces are mathematically simulated for the generation of the equations of motion. The validation of the mathematical model is done in two different phases. The first phase is the validation of the plastic deformation of the crushable elements by employing the conservation of energy principle.
The second phase is the validation of the rigid body dynamics of the model by simulating a lander model in ADAMS software after replacing the crushable elements with elastic spring elements. Simulation of plastic deformation along with rigid body dynamics and contact force cannot be modeled in ADAMS. Hence the plastic element of the primary strut is replaced with a spring element and the analysis is carried out in ADAMS. The same analysis is also carried out using the mathematical model, where the simulation of honeycomb crushing is replaced by elastic spring deformation, and the results are compared with the ADAMS analysis. The rotational motion of the linkages and the six degree of freedom motion of the lunar lander about its CG can thus be validated against ADAMS by replacing the crushing element with a spring element. The model is also validated by the drop test results of a four-leg lunar lander. This paper presents the details of the mathematical model generated and its validation. Keywords: honeycomb, landing leg tripod, lunar lander, primary link, secondary link
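As a much-reduced illustration of the touchdown dynamics, the sketch below integrates a one-dimensional version of the problem: a lander mass decelerated by an approximately constant honeycomb crushing force under lunar gravity, using explicit Euler integration. All parameter values are assumptions for illustration; the actual model is three-dimensional and includes friction, slope and linkage dynamics:

```python
# 1-D sketch of touchdown: the honeycomb crushes at a roughly constant force
# until the vertical velocity reaches zero. Explicit Euler integration.
# All parameter values below are illustrative assumptions, not study data.
m = 800.0         # lander mass, kg (assumed)
g = 1.62          # lunar gravitational acceleration, m/s^2
F_crush = 8000.0  # honeycomb crushing force, N (assumed)
v = -3.0          # initial vertical touchdown velocity, m/s (downward)
x = 0.0           # accumulated crush stroke, m
dt = 1e-4         # time step, s

while v < 0.0:                  # integrate until vertical motion stops
    a = F_crush / m - g         # net upward deceleration while crushing
    v += a * dt
    x += -v * dt                # crush stroke grows while v is negative

print(f"honeycomb crush stroke ~ {x:.3f} m")
```

The result can be checked against energy conservation, the same principle used in the first validation phase above: the stroke should approach v₀² / (2·(F_crush/m − g)).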
Procedia PDF Downloads 351
182 Increase in the Shelf Life Anchovy (Engraulis ringens) from Flaying then Bleeding in a Sodium Citrate Solution
Authors: Santos Maza, Enzo Aldoradin, Carlos Pariona, Eliud Arpi, Maria Rosales
Abstract:
The objective of this study was to investigate the effect of flaying and then bleeding anchovy (Engraulis ringens) immersed in a sodium citrate solution. Anchovy is a pelagic fish that readily deteriorates due to its high content of polyunsaturated fatty acids. As such, within the Peruvian food industry, the shelf life of frozen anchovy is only 6 months; this short duration is a barrier to its use for direct human consumption. Thus, almost all anchovy captured by the fishing industry is eventually used in the production of fishmeal. We offer an alternative to this typical production process in order to increase shelf life. In the present study, 100 kg of anchovies were captured and immediately mixed with ice on the ship, maintaining high sensory quality (e.g., blue coloration on the back) while arriving for processing less than 2 h after capture. Anchovies with a fat content of 3% were immediately flayed (i.e., reducing subcutaneous fat), beheaded, gutted and bled (i.e., removing hemoglobin) by immersion in water (control) or in a solution of 2.5% sodium citrate (treatment), then frozen at -30 °C for 8 h in 2 kg batches. Subsequent glazing and storage at -25 °C for 14 months completed the experimental treatment. The peroxide value (PV), acidity (A), fatty acid profile (FAP), thiobarbituric acid reactive substances (TBARS), heme iron (HI), pH and sensory attributes of the samples were evaluated monthly. The results of the PV, TBARS, A, pH and sensory analyses displayed significant differences (p<0.05) between treatment and control samples, where the sodium citrate treated samples showed increased preservation. Specifically, at the beginning of the study, flayed, beheaded, gutted and bled anchovies displayed a low fat content (1.5%) with moderate amounts of PV, A and TBARS, and were not rejected by sensory analysis. HI values and the FAP displayed varying behavior; however, the HI results did not reveal a decreasing trend.
This result indicates that iron was retained as HI and did not convert into non-heme iron, which is known to be the primary catalyst of lipid oxidation in fish. According to the FAP results, polyunsaturated fatty acids (PFA) were the most abundant, followed by saturated fatty acids (SFA) and then monounsaturated fatty acids (MFA). According to sensory analysis, the shelf life of flayed, beheaded and gutted anchovy (control and treatment) was 14 months. This shelf life was achieved at the laboratory level because high-quality anchovies were used and were immediately flayed, beheaded, gutted, bled and frozen. Therefore, it is possible to maintain the quality of frozen anchovies for an extended period. Overall, this method displayed a large increase in shelf life relative to that commonly seen for anchovies in this industry. However, these results should be validated at industrial scale to propose better processing conditions and improve the quality of anchovy for direct human consumption.
Keywords: sodium citrate solution, heme iron, polyunsaturated fatty acids, shelf life of frozen anchovy
Procedia PDF Downloads 294
181 Quasi-Federal Structure of India: Fault-Lines Exposed in COVID-19 Pandemic
Authors: Shatakshi Garg
Abstract:
As the world continues to grapple with the COVID-19 pandemic, India, one of the most populous democratic federal developing nations, continues to report among the highest numbers of active cases and deaths, and struggles to keep its health infrastructure from succumbing to the exponentially growing demand for hospital beds, ventilators and oxygen, with thousands of lives at risk daily. In this context, the paper outlines the handling of the COVID-19 pandemic since it first hit India in January 2020 – the policy decisions taken by the Union and the State governments from the larger perspective of its federal structure. The Constitution of India, adopted in 1950, enshrined the federal relations between the Union and the State governments by way of the constitutional division of revenue-raising and expenditure responsibilities. By way of the 73rd and 74th Amendments to the Constitution, powers and functions were devolved further to the third tier, namely the local governments, with the intention of further strengthening the federal structure of the country. However, with time, several constitutional amendments have shifted the scales in favour of the Union government. The paper briefly traces some of these major amendments, as well as some policy decisions that made federal relations asymmetrical. As a result, data on key fiscal parameters helps establish how the Union government gained the upper hand at the expense of weak State governments, reducing the local governments to mere constitutional bodies without adequate funds and fiscal autonomy to carry out their assigned functions. This quasi-federal structure of India, with the Union government amassing the majority of power in terms of ‘funds, functions and functionaries’, exposed the perils of weakening sub-national governments during the COVID-19 pandemic. 
With a complex quasi-federal structure and a heterogeneous population of over 1.3 billion, the announcement of a sudden nationwide lockdown by the Union government was followed by the plight of migrants struggling to reach their homes safely in the absence of adequate travel and safety-net arrangements by the Union government. With the limited autonomy they enjoyed, the States were mostly dictated to by the Union government on most aspects of handling the pandemic, including protocols for lockdown, re-opening after lockdown, and the vaccination drive. The paper suggests that certain policy decisions, such as demonetization and the introduction of GST, taken by the incumbent government since it first came to power in 2014, have further weakened the State and local governments, amounting to catastrophic economic and human losses. The roles of the executive, legislature and judiciary are explored to establish how these three arms of government have worked simultaneously to further weaken and expose the fault-lines of India's federal structure, leaving the nation ill-equipped to handle this pandemic. The paper then suggests the urgency of re-examining the federal structure of the country and undertaking measures that strengthen the sub-national governments and restore the federal spirit enshrined in the Constitution, so as to avoid mammoth human and economic losses from a pandemic of this sort.
Keywords: COVID-19 pandemic, India, federal structure, economic losses
Procedia PDF Downloads 179