Search results for: carbon capture and utilization (CCU)
125 Accelerating Personalization Using Digital Tools to Drive Circular Fashion
Authors: Shamini Dhana, G. Subrahmanya VRK Rao
Abstract:
The fashion industry is advancing towards a mindset of zero waste, personalization, creativity, and circularity. The trend of upcycling clothing and materials into personalized fashion is being demanded by the next generation, and a digital tool is needed to accelerate the move towards mass customization. Dhana’s D/Sphere fashion technology platform uses digital tools to accelerate upcycling. In essence, advanced fashion garments can be designed and developed through reuse, repurposing, and recreation, using existing fabric and circulating materials. The D/Sphere platform has the following objectives: to provide (1) an opportunity to develop modern fashion using existing, finished materials and clothing without chemicals or water consumption; (2) the potential for everyday customers and designers to use the medium of fashion for creative expression; (3) a solution to address the global textile waste generated by pre- and post-consumer fashion; (4) a solution to reduce carbon emissions, water, and energy consumption with the participation of all stakeholders; (5) an opportunity for brands, manufacturers, and retailers to work towards zero-waste designs and an alternative revenue stream. Other benefits of this alternative approach include sustainability metrics, trend prediction, facilitation of disassembly and remanufacture, and deep learning and hyperheuristics for high accuracy. A design tool for mass personalization and customization utilizing existing circulating materials and deadstock, targeted at fashion stakeholders, will lower environmental costs, increase revenues through up-to-date upcycled apparel, produce less textile waste during the cut-sew-stitch process, and provide a real design solution for the end customer to be part of circular fashion. The broader impact of this technology will be a different mindset towards circular fashion, increased product value through multiple life cycles, alternatives towards zero waste, and a reduction in the textile waste that ends up in landfills. This technology platform will be of interest to brands and companies that have the responsibility to reduce their environmental impact and contribution to climate change as it pertains to the fashion and apparel industry. Today, over 70% of the output of the $3 trillion fashion and apparel industry ends up in landfills. To this extent, the industry needs such alternative techniques both to address global textile waste and to provide an opportunity to include all stakeholders and drive circular fashion with new personalized products. This type of modern systems thinking is currently being explored around the world by the private sector, organizations, research institutions, and governments. This technological innovation using digital tools has the potential to revolutionize the way we look at communication, capabilities, and collaborative opportunities amongst stakeholders in the development of new personalized and customized products, as well as its positive impacts on society, our environment, and global climate change.
Keywords: circular fashion, deep learning, digital technology platform, personalization
Procedia PDF Downloads 65
124 Seasonal Variability of Picoeukaryotes Community Structure Under Coastal Environmental Disturbances
Authors: Benjamin Glasner, Carlos Henriquez, Fernando Alfaro, Nicole Trefault, Santiago Andrade, Rodrigo De La Iglesia
Abstract:
A central question in ecology refers to the relative importance that local-scale variables have over community composition when compared with regional-scale variables. In coastal environments, a strong seasonal abiotic influence dominates these systems, weakening the impact of other parameters such as micronutrients. Since the industrial revolution, micronutrients like trace metals have increased in the ocean as pollutants, with strong effects upon biotic entities and biological processes in coastal regions. Coastal picoplankton communities have been characterized as a cyanobacteria-dominated fraction, but in recent years the eukaryotic component of this size fraction has gained relevance due to its strong influence on the carbon cycle, although its diversity patterns and responses to disturbances are poorly understood. South Pacific upwelling coastal environments represent an excellent model to study seasonal changes because of the strong seasonal differences in the availability of macro- and micronutrients. In addition, some well-constrained coastal bays of this region have been subjected to strong disturbances due to trace metal inputs. In this study, we aim to compare the influence of seasonality and trace metal concentrations on the community structure of planktonic picoeukaryotes. To describe seasonal patterns in the study area, satellite data from a six-year time series and in-situ measurements with a traditional oceanographic approach (CTDO equipment) were used. In addition, trace metal concentrations were analyzed through ICP-MS analysis for the same region. For biological data collection, field campaigns were performed in 2011-2012, and the picoplankton community was described by flow cytometry and taxonomic characterization with next-generation sequencing of ribosomal genes. The relation between the abiotic and biotic components was finally determined by multivariate statistical analysis. Our data show strong seasonal fluctuations in abiotic parameters such as photosynthetically active radiation and sea surface temperature, with a clear differentiation of seasons. However, trace metal analysis allows identification of a strong differentiation within the study area, dividing it into two zones based on trace metal concentrations. Biological data indicate that there are no major changes in diversity but significant fluctuations in evenness and community structure. These changes are related mainly to regional parameters, like temperature, but by analyzing the influence of metals on picoplankton community structure, we identify a differential response of some plankton taxa to metal pollution. We propose that some picoeukaryotic plankton groups respond differentially to metal inputs by changing their nutritional status and/or requirements under disturbance, as a derived outcome of toxic effects and tolerance.
Keywords: picoeukaryotes, plankton communities, trace metals, seasonal patterns
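The abstract reports shifts in community evenness rather than richness; one standard way to quantify this (the authors do not state which index they used, so the choice below is an assumption) is Pielou's evenness derived from the Shannon index:

```latex
H' = -\sum_{i=1}^{S} p_i \ln p_i, \qquad J' = \frac{H'}{\ln S}
```

where p_i is the relative abundance of taxon i and S is the number of taxa; J' approaches 1 for a perfectly even community and decreases as a few taxa come to dominate.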
Procedia PDF Downloads 173
123 Numerical Investigation of Thermal Energy Storage Panel Using Nanoparticle Enhanced Phase Change Material for Micro-Satellites
Authors: Jelvin Tom Sebastian, Vinod Yeldho Baby
Abstract:
In space, electronic devices are constantly exposed to radiation, which causes certain parts to fail or behave in unpredictable ways. To advance thermal controllability for microsatellites, a new approach and a thermal control system are needed that are smaller than those on conventional satellites and demand no electric power. Heat exchange inside microsatellites is more difficult than in conventional satellites because of the smaller size. With only a slight mass gain and no electric power, accommodating heat using phase change materials (PCMs) is a strong candidate for solving the thermal difficulties of microsatellites. In other words, PCMs can absorb or release heat in the form of latent heat, changing their phase and minimizing the temperature fluctuation around the phase change point. The main restriction for these systems is the low thermal conductivity of common PCMs, which increases the melting and solidification times and is unsuitable for specific applications such as electronics cooling. In order to increase the thermal conductivity, nanoparticles are introduced into the base PCM; the higher the weight concentration, the higher the thermal conductivity. This paper numerically investigates a thermal energy storage panel with nanoparticle-enhanced phase change material (NePCM). Silver nanostructures improve the thermal properties of the base PCM, eicosane. Different weight concentrations (1, 2, 3.5, 5, 6.5, 8, and 10%) of silver-enhanced phase change material were considered. Both steady-state and transient analyses were performed to compare the characteristics of the nanoparticle-enhanced phase change material at different heat loads. Results showed that in steady state, the temperature near the front panel decreased and the temperature on the NePCM panel increased as the weight concentration increased; with the increase in thermal conductivity, more heat was absorbed into the NePCM panel. In the transient analysis, the effect of nanoparticle concentration on the maximum temperature of the system diminished, because the melting point of the material decreases with increasing weight concentration. For a maximum heat load of 20 W, however, the model with NePCM did not reach the melting point temperature, showing that the model with NePCM is capable of holding a higher heat load. To study the heat load capacity, the load was doubled: a maximum of 40 W was applied during the first half of the cycle and a constant 0 W during the second half. Higher temperatures were obtained compared with the lower heat load, and the panel maintained a constant temperature for a long duration, governed by the NePCM melting point. Both analyses demonstrated the temperature uniformity of the TESP. Using Ag-NePCM allows a constant peak temperature to be maintained near the melting point. Therefore, by altering the weight concentration of the Ag-NePCM, it is possible to create the optimum operating temperature required for the effective working of the electronic components.
Keywords: carbon-fiber-reinforced polymer, micro/nano-satellite, nanoparticle phase change material, thermal energy storage
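The abstract states that adding silver nanoparticles raises the thermal conductivity of the base PCM (eicosane) but does not give the correlation used; the sketch below applies the classical Maxwell effective-medium estimate as a first approximation. The property values and the choice of the Maxwell model are assumptions for illustration, not values taken from the paper.

```python
# Sketch: Maxwell effective-medium estimate of NePCM thermal conductivity.
# Assumed nominal properties (not from the paper): eicosane k ~ 0.15 W/m.K,
# rho ~ 780 kg/m^3; silver k ~ 429 W/m.K, rho ~ 10490 kg/m^3.

def mass_to_volume_fraction(w, rho_p=10490.0, rho_f=780.0):
    """Convert nanoparticle mass fraction w to volume fraction phi."""
    return (w / rho_p) / (w / rho_p + (1.0 - w) / rho_f)

def maxwell_k_eff(phi, k_p=429.0, k_f=0.15):
    """Maxwell (Maxwell-Garnett) effective conductivity for dilute spherical particles."""
    return k_f * (k_p + 2 * k_f + 2 * phi * (k_p - k_f)) / (k_p + 2 * k_f - phi * (k_p - k_f))

for w_pct in (1, 2, 3.5, 5, 6.5, 8, 10):   # weight concentrations listed in the abstract
    phi = mass_to_volume_fraction(w_pct / 100.0)
    print(f"{w_pct:>4}% wt -> phi = {phi:.4f}, k_eff ~ {maxwell_k_eff(phi):.3f} W/m.K")
```

The estimate reproduces the monotonic (though modest, at these volume fractions) rise of conductivity with weight concentration noted in the abstract; the authors' own property data would refine the numbers.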
Procedia PDF Downloads 203
122 Surface Roughness in the Incremental Forming of Drawing Quality Cold Rolled CR2 Steel Sheet
Authors: Zeradam Yeshiwas, A. Krishnaia
Abstract:
The aim of this study is to verify the resulting surface roughness of parts formed by the single-point incremental forming (SPIF) process for an ISO 3574 drawing quality cold rolled CR2 steel. The chemical composition of drawing quality cold rolled CR2 steel comprises 0.12 percent carbon, 0.5 percent manganese, 0.035 percent sulfur, and 0.04 percent phosphorus, with the remainder iron and negligible impurities. The experiments were performed on a 3-axis vertical CNC milling machining center equipped with a tool setup comprising a fixture and forming tools specifically designed and fabricated for the process. The CNC milling machine was used to transfer the tool path code generated in the Mastercam 2017 environment into three-dimensional motions by the linear incremental progress of the spindle. Blanks of drawing quality cold rolled CR2 steel sheet, 1 mm thick, were fixed along their periphery by a fixture, and hardened high-speed steel (HSS) tools with hemispherical tips of 8, 10, and 12 mm diameter were employed to fabricate the sample parts. To investigate the surface roughness, hyperbolic-cone-shaped specimens were fabricated based on the chosen experimental design. The effect of process parameters on the surface roughness was studied using three important process parameters, i.e., tool diameter, feed rate, and step depth. The Taylor-Hobson Surtronic 3+ surface roughness tester (profilometer), in which a small tip is dragged across a surface while its deflection is recorded, was used to determine the surface roughness of the fabricated parts in terms of the arithmetic mean deviation (Rₐ). Finally, the optimum process parameters and the main factor affecting surface roughness were found using the Taguchi design of experiments and ANOVA. A Taguchi design with three factors and three levels for each factor, the standard orthogonal array L9 (3³), was selected for the study using the array selection table. Rₐ was measured for each combination of the control factors; four roughness measurements were taken for a single component, and their average was used to optimize the surface roughness. Since the lowest value of Rₐ is what matters for surface roughness improvement, the ‘‘smaller-the-better’’ equation was used for the calculation of the S/N ratio. Analysis of the effect of each control factor on the surface roughness was performed with an ‘‘S/N response table’’. Optimum surface roughness was obtained at a feed rate of 1500 mm/min, a tool diameter of 12 mm, and a step depth of 0.5 mm. The ANOVA result shows that step depth is the dominant factor affecting surface roughness, with a contribution of 91.1%.
Keywords: incremental forming, SPIF, drawing quality steel, surface roughness, roughness behavior
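The ‘‘smaller-the-better’’ signal-to-noise ratio used in the abstract has a standard closed form; the short sketch below computes it for the four Rₐ readings taken per specimen. The Rₐ values shown are placeholders, not the measured data from the study.

```python
import math

def sn_smaller_the_better(values):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean of squared responses)."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical Ra readings (micrometres) for one L9 trial -- placeholders only.
ra_readings = [1.42, 1.38, 1.45, 1.40]
print(f"Mean Ra = {sum(ra_readings) / len(ra_readings):.3f} um, "
      f"S/N = {sn_smaller_the_better(ra_readings):.2f} dB")
```

In the S/N response table, the factor level giving the highest (least negative) S/N ratio is the preferred setting.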
Procedia PDF Downloads 62
121 Navigating the Nexus of HIV/AIDS Care: Leveraging Statistical Insight to Transform Clinical Practice and Patient Outcomes
Authors: Nahashon Mwirigi
Abstract:
The management of HIV/AIDS is a global challenge, demanding precise tools to predict disease progression and guide tailored treatment. CD4 cell count dynamics, a crucial immune function indicator, play an essential role in understanding HIV/AIDS progression and enhancing patient care through effective modeling. While several models assess disease progression, existing methods often fall short in capturing the complex, non-linear nature of HIV/AIDS, especially across diverse demographics. A need exists for models that balance predictive accuracy with clinical applicability, enabling individualized care strategies based on patient-specific progression rates. This study utilizes patient data from Kenyatta National Hospital (2003–2014) to model HIV/AIDS progression across six CD4-defined states. The Exponential, 2-Parameter Weibull, and 3-Parameter Weibull models are employed to analyze failure rates and explore progression patterns by age and gender. Model selection is based on Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) to identify models best representing disease progression variability across demographic groups. The 3-Parameter Weibull model emerges as the most effective, accurately capturing HIV/AIDS progression dynamics, particularly by incorporating delayed progression effects. This model reflects age and gender-specific variations, offering refined insights into patient trajectories and facilitating targeted interventions. One key finding is that older patients progress more slowly through CD4-defined stages, with a delayed onset of advanced stages. This suggests that older patients may benefit from extended monitoring intervals, allowing providers to optimize resources while maintaining consistent care. Recognizing slower progression in this demographic helps clinicians reduce unnecessary interventions, prioritizing care for faster-progressing groups. Gender-based analysis reveals that female patients exhibit more consistent progression, while male patients show greater variability. This highlights the need for gender-specific treatment approaches, as men may require more frequent assessments and adaptive treatment plans to address their variable progression. Tailoring treatment by gender can improve outcomes by addressing distinct risk patterns in each group. The model’s ability to account for both accelerated and delayed progression equips clinicians with a robust tool for estimating the duration of each disease stage. This supports individualized treatment planning, allowing clinicians to optimize antiretroviral therapy (ART) regimens based on demographic factors and expected disease trajectories. Aligning ART timing with specific progression patterns can enhance treatment efficacy and adherence. The model also has significant implications for healthcare systems, as its predictive accuracy enables proactive patient management, reducing the frequency of advanced-stage complications. For resource limited providers, this capability facilitates strategic intervention timing, ensuring that high-risk patients receive timely care while resources are allocated efficiently. Anticipating progression stages enhances both patient care and resource management, reinforcing the model’s value in supporting sustainable HIV/AIDS healthcare strategies. This study underscores the importance of models that capture the complexities of HIV/AIDS progression, offering insights to guide personalized, data-informed care. 
The 3-Parameter Weibull model’s ability to accurately reflect delayed progression and demographic risk variations presents a valuable tool for clinicians, supporting the development of targeted interventions and resource optimization in HIV/AIDS management.
Keywords: HIV/AIDS progression, 3-parameter Weibull model, CD4 cell count stages, antiretroviral therapy, demographic-specific modeling
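A minimal sketch of how the 2- and 3-parameter Weibull fits described above can be compared with AIC, using scipy. The stage-duration data are synthetic placeholders, and the variable names are illustrative rather than taken from the study; the third (location) parameter is what captures the delayed onset of progression.

```python
import numpy as np
from scipy import stats

# Synthetic stage durations (months between CD4-defined states) -- placeholders.
rng = np.random.default_rng(0)
durations = stats.weibull_min.rvs(c=1.8, loc=3.0, scale=20.0, size=200, random_state=rng)

def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

# 2-parameter fit: location fixed at zero (no delay before progression).
c2, loc2, scale2 = stats.weibull_min.fit(durations, floc=0.0)
aic2 = aic(stats.weibull_min.logpdf(durations, c2, loc2, scale2).sum(), 2)

# 3-parameter fit: location free, modelling a delay before advanced stages begin.
c3, loc3, scale3 = stats.weibull_min.fit(durations)
aic3 = aic(stats.weibull_min.logpdf(durations, c3, loc3, scale3).sum(), 3)

print(f"2-parameter AIC = {aic2:.1f}, 3-parameter AIC = {aic3:.1f} (lower is better)")
```

Fitting such models separately by age group and gender is one way to reproduce the demographic comparisons the abstract describes.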
Procedia PDF Downloads 7
120 Ammonia Cracking: Catalysts and Process Configurations for Enhanced Performance
Authors: Frea Van Steenweghen, Lander Hollevoet, Johan A. Martens
Abstract:
Compared to other hydrogen (H₂) carriers, ammonia (NH₃) is one of the most promising, as it contains 17.6 wt% hydrogen. It is easily liquefied at ≈ 9–10 bar pressure at ambient temperature. More importantly, NH₃ is a carbon-free hydrogen carrier with no CO₂ emission at final decomposition. Ammonia has a well-defined regulatory framework and a good track record regarding safety. Furthermore, industry already has an existing transport infrastructure consisting of pipelines, tank trucks, and shipping technology, as ammonia has been manufactured and distributed around the world for over a century. While NH₃ synthesis and transportation technologies are at hand, the missing link in the hydrogen delivery scheme from ammonia is an energy-lean and efficient technology for cracking ammonia into H₂ and N₂. The most explored option for ammonia decomposition is thermocatalytic cracking, which is the most energy-lean and robust approach compared to other technologies such as plasma and electrolysis. The thermocatalytic ammonia cracking process faces thermodynamic limitations: the decomposition reaction is favoured only at high temperatures (> 300°C) and low pressures (1 bar). At 350°C, the thermodynamic equilibrium at 1 bar pressure limits the conversion to 99%. Gaining additional conversion up to, e.g., 99.9% necessitates heating to ca. 530°C. However, actually reaching thermodynamic equilibrium is infeasible, as a sufficient driving force is needed, requiring even higher temperatures. Limiting the conversion below the equilibrium composition is a more economical option. Thermocatalytic ammonia cracking is documented in the scientific literature. Among the investigated metal catalysts (Ru, Co, Ni, Fe, …), ruthenium is known to be the most active for ammonia decomposition, with an onset of cracking activity around 350°C. To establish > 99% conversion, temperatures close to 600°C are required. Such high temperatures are likely to reduce not only the round-trip efficiency but also the catalyst lifetime, because of sintering of the supported metal phase. In this research, the first focus was on catalyst bed design, avoiding diffusion limitation. Experiments in our packed bed tubular reactor set-up showed that extragranular diffusion limitations occur at low concentrations of NH₃ when reaching high conversion, a phenomenon often overlooked in experimental work. A second focus was thermocatalyst development for ammonia cracking, avoiding the use of noble metals. To this end, candidate metals and mixtures were deposited on a range of supports. Sintering resistance at high temperatures and the basicity of the support were found to be crucial catalyst properties. The catalytic activity was promoted by adding alkali and alkaline earth metals. A third focus was studying the optimum process configuration by process simulations. A trade-off between conversion and favourable operational conditions (i.e., low pressure and high temperature) may lead to different process configurations, each with its own pros and cons. For example, high-pressure cracking would eliminate the need for post-compression but is detrimental to the thermodynamic equilibrium, leading to an optimum in cracking pressure in terms of energy cost.
Keywords: ammonia cracking, catalyst research, kinetics, process simulation, thermodynamic equilibrium
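For a feed of pure NH₃ and ideal-gas behaviour (assumptions made here for illustration; the abstract does not spell out the relation), the equilibrium conversion α for NH₃ ⇌ ½N₂ + 3⁄2H₂ at total pressure P follows directly from the mole balance:

```latex
K_p(T) \;=\; \frac{y_{\mathrm{N_2}}^{1/2}\, y_{\mathrm{H_2}}^{3/2}}{y_{\mathrm{NH_3}}}\,\frac{P}{P^\circ}
\;=\; \frac{3\sqrt{3}}{4}\,\frac{\alpha^{2}}{1-\alpha^{2}}\,\frac{P}{P^\circ}
```

so α increases as the pressure falls and as K_p (i.e., temperature) rises, which is why near-complete cracking calls for low pressure and temperatures well above the ~350°C onset, and why high-pressure cracking trades equilibrium conversion against savings in post-compression, as discussed above.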
Procedia PDF Downloads 66
119 Critical Evaluation of Long Chain Hydrocarbons with Biofuel Potential from Marine Diatoms Isolated from the West Coast of India
Authors: Indira K., Valsamma Joseph, I. S. Bright
Abstract:
Introduction: Biofuels could replace fossil fuels and reduce our carbon footprint on the planet, given the technological advancements needed for sustainable and economic fuel production. Microalgae have proven to be a promising source to meet the current energy demand because of their high lipid content and rapid production of high biomass. Marine diatoms are key contributors in the biofuel sector and also play a significant role in primary productivity and ecology, with high biodiversity and genetic and chemical diversity, yet they are less well understood than other microalgae as hydrocarbon producers. Method: A total of eleven marine diatom samples were selected for hydrocarbon analysis; nine were from the culture collection of NCAAH, and the remaining two were isolated by the serial dilution method to obtain pure cultures from mixed microalgal cultures collected at cruise stations (350 and 357) of FORV Sagar Sampada along the west coast of India. These diatoms were mass cultured in F/2 medium and the biomass harvested. The crude extract was obtained from the biomass by homogenising with n-hexane, the hydrocarbons were recovered by passing the crude extract through a 500 mg Bonna Agela SPE column, and quantitative analysis was done by GC-HRMS using an HP-5 column with helium as the carrier gas (1 mL/min). The injector port temperature was 240°C, the detector temperature was 250°C, and the oven was initially kept at 60°C for 1 minute and increased to 220°C at the rate of 6°C per minute, and the resulting mixture of long chain hydrocarbons was analysed. Results: In the qualitative analysis, the most potent producer was found to be Psammodictyon panduriforme (NCAAH-9), with a hydrocarbon mass of 37.27 mg/g of biomass and 2.1% of the total biomass of 1.395 g; the other potent producer was Biddulphia (NCAAH-6), with a hydrocarbon mass of 25.4 mg/g of biomass and a hydrocarbon percentage of 1.03%. In the quantitative analysis by GC-HRMS, the long chain hydrocarbons found in most of the marine diatoms were undecane, hexadecane, 3-ethyl-5-(2-ethylbutyl)octadecane, 7-hexyleicosane, hexacosane, heptacosane, heneicosane, 3-methyloctadecane, and triacontane. The exact masses of the long chain hydrocarbons in the marine diatom samples corresponded to nonadecane (¹²C₁₉¹H₄₀), 13-decyl-13-heptyltritriacontane (¹²C₅₀¹H₁₀₂), 3-ethyl-5-(2-ethylbutyl)octadecane (¹²C₂₆¹H₅₄), tetratetracontane (¹²C₄₄¹H₈₉), and 7-hexyleicosane (¹²C₂₆¹H₅₄). Conclusion: All the marine diatoms screened produced long chain hydrocarbons that can be used as diesel fuel with good cetane values, for example hexadecane and undecane. All the long chain hydrocarbons can further undergo catalytic cracking to produce short chain alkanes, which give good octane values and can be used as gasoline. Optimisation of hydrocarbon production with the most potent marine diatom yielded long chain hydrocarbons of good fuel quality.
Keywords: biofuel, hydrocarbons, marine diatoms, screening
Procedia PDF Downloads 76
118 Design of Ultra-Light and Ultra-Stiff Lattice Structure for Performance Improvement of Robotic Knee Exoskeleton
Authors: Bing Chen, Xiang Ni, Eric Li
Abstract:
With the population ageing, the number of patients suffering from chronic diseases is increasing, among which stroke has a high incidence in the elderly. In addition, there is a gradual increase in the number of patients with orthopedic or neurological conditions such as spinal cord injuries, nerve injuries, and other knee injuries. These diseases are chronic, with high recurrence and complications, and normal walking is difficult for such patients. Nowadays, robotic knee exoskeletons have been developed for individuals with knee impairments. However, the currently available robotic knee exoskeletons are generally heavy, which makes them uncomfortable to wear, causes wearing fatigue, shortens the wearing time, and reduces the efficiency of the exoskeleton. Some lightweight materials, such as carbon fiber and titanium alloy, have been used for the development of robotic knee exoskeletons; however, this increases their cost. This paper illustrates the design of a new ultra-light and ultra-stiff truss-type lattice structure. The lattice structures are arranged in a fan shape, which fits well with circular arc surfaces such as circular holes, and they can be utilized in the design of rods, brackets, and other parts of a robotic knee exoskeleton to reduce the weight. The metamaterial is formed by the continuous arrangement and combination of small truss-structure unit cells, varying the diameter of the pillar section, the geometrical size, and the relative density of each unit cell. It can be made quickly through additive manufacturing techniques such as metal 3D printing. Because the unit cell of the truss structure is small, the machined parts of the robotic knee exoskeleton, such as connectors, rods, and bearing brackets, can be filled and replaced by gradient arrangements and non-uniform distributions. While satisfying the required mechanical properties of the robotic knee exoskeleton, the weight is reduced; hence, the patient's wearing fatigue is relieved and the wearing time of the exoskeleton is increased, so the efficiency, wearing comfort, and safety of the exoskeleton can be improved. In this paper, a brief description of the hardware design of the prototype of the robotic knee exoskeleton is first presented. Next, the design of the ultra-light and ultra-stiff truss-type lattice structures is proposed, and the mechanical analysis of the single-cell unit is performed by establishing a theoretical model. Additionally, simulations are performed to evaluate the maximum stress-bearing capacity and compressive performance of uniform and gradient arrangements of the cells. Finally, static analyses are performed for the cell-filled rod and the unmodified rod, respectively, and the simulation results demonstrate the effectiveness and feasibility of the designed ultra-light and ultra-stiff truss-type lattice structures. In future studies, experiments will be conducted to further evaluate the performance of the designed lattice structures.
Keywords: additive manufacturing, lattice structures, metamaterial, robotic knee exoskeleton
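As context for the stiffness-to-weight argument (a commonly used scaling relation, not one stated in the abstract), the effective modulus of a cellular lattice is often related to its relative density by a Gibson–Ashby-type power law:

```latex
\frac{E^{*}}{E_{s}} \;\approx\; C\left(\frac{\rho^{*}}{\rho_{s}}\right)^{n},
\qquad n \approx 1 \ \text{(stretch-dominated truss)}, \quad n \approx 2 \ \text{(bending-dominated)}
```

where E_s and ρ_s are the solid material properties and C is a geometry-dependent constant. Varying the strut (pillar) diameter, and hence ρ*/ρ_s, of each unit cell is then the natural handle for trading weight against stiffness in the gradient fill described above.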
Procedia PDF Downloads 107
117 Electricity Market Reforms Towards Clean Energy Transition and Their Impact in India
Authors: Tarun Kumar Dalakoti, Debajyoti Majumder, Aditya Prasad Das, Samir Chandra Saxena
Abstract:
India’s ambitious target to achieve a 50 percent share of energy from non-fossil fuels and 500 gigawatts (GW) of renewable energy capacity before the 2030 deadline, coupled with the global pursuit of sustainable development, will compel the nation to embark on a rapid clean energy transition. As a result, electricity market reforms will emerge as critical policy instruments to facilitate this transition and achieve ambitious environmental targets. This paper will present a comprehensive analysis of the various electricity market reforms to be introduced in the Indian electricity sector to facilitate the integration of clean energy sources and will assess their impact on the overall energy landscape. The first section of this paper will delve into the policy mechanisms to be introduced by the Government of India and the Central Electricity Regulatory Commission to promote clean energy deployment. These mechanisms include extensive provisions for the integration of renewables in the Indian Electricity Grid Code, 2023. The section will also cover the projections of RE generation highlighted in the National Electricity Plan, 2023. It will discuss the introduction of green energy market segments, the waiver of Inter-State Transmission System (ISTS) charges for inter-state sale of solar and wind power, the notification of the Promoting Renewable Energy through Green Energy Open Access Rules, and the bundling of conventional generating stations with renewable energy sources. The second section will evaluate the tangible impact of these electricity market reforms. By drawing on empirical studies and real-world case examples, the paper will assess the penetration rate of renewable energy sources in India’s electricity markets, the decline of conventional fuel-based generation, and the consequent reduction in carbon emissions. Furthermore, it will explore the influence of these reforms on electricity prices, the impact of the introduction of green contracts on various market segments, and grid stability. The paper will also discuss the operational challenges arising from the surge of RE generation sources that will result from implementing the above-mentioned electricity market reforms, including grid integration issues, intermittency concerns with renewable energy sources, and the need to increase grid resilience for future high-RE generation mix scenarios. In conclusion, this paper will emphasize that electricity market reforms will be pivotal in accelerating the global transition towards clean energy systems. It will underscore the importance of a holistic approach that combines effective policy design, robust regulatory frameworks, and active participation from market actors. Through a comprehensive examination of the impact of these reforms, the paper will shed light on the significance of India’s sustained commitment to a cleaner, more sustainable energy future.
Keywords: renewables, Indian electricity grid code, national electricity plan, green energy market
Procedia PDF Downloads 42
116 Development of Alternative Fuels Technologies for Transportation
Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej
Abstract:
Currently, automotive transport is powered almost exclusively by hydrocarbon-based fuels. As the consumption of hydrocarbon fuels increases, quality parameters are being tightened to protect the environment, and at the same time, efforts are being made to develop alternative fuels. The reasons for seeking alternatives to petrol and diesel are to increase vehicle efficiency, reduce environmental impact, cut greenhouse gas emissions, and save limited oil resources. Significant progress has been made on the development of alternative fuels such as methanol, ethanol, natural gas (CNG/LNG), LPG, dimethyl ether (DME), and biodiesel. In addition, the biggest vehicle manufacturers are working on fuel cell vehicles and their introduction to the market. Alcohols such as methanol and ethanol make excellent fuels for spark-ignition engines. Their advantages are a high antiknock value, which determines their application as an additive (10%) to unleaded petrol, and the relative purity of the exhaust gases produced. Ethanol is produced by the distillation of plant products whose value as food may make this use questionable. Ethanol production can also be costly for the entire economy of a country, because it requires large, complex distillation plants, large amounts of biomass, and finally a significant amount of fuel to sustain the process. At the same time, the fermentation of plants releases large quantities of carbon dioxide into the atmosphere. Natural gas cannot be directly converted into liquid fuels, although such arrangements have been proposed in the literature; going through intermediate stages is still inevitable. The most popular route is conversion to methanol, which can be processed further to dimethyl ether (DME) or olefins (ethylene and propylene) for the petrochemical sector. Methanol uses natural gas as a raw material but requires expensive and advanced production processes. In relation to pollutant emissions, an optimal vehicle fuel is LPG, which is used in many countries as an engine fuel. The production of LPG is inextricably linked with the production and processing of oil and gas, of which it represents a small percentage; its potential as an alternative to traditional fuels is therefore proportionately limited. Biogas can also be an excellent engine fuel; however, it is subject to the same limitations as ethanol, since similar production processes and raw materials are involved. The most essential fuel in the campaign to protect the environment against pollution is natural gas, which as a fuel may be either compressed (CNG) or liquefied (LNG). Natural gas can also be used for hydrogen production by steam reforming. Hydrogen can be used as a basic starting material for the chemical industry, an important raw material in refinery processes, as well as a fuel for vehicle transportation. CNG represents an excellent compromise, as the technology is proven and relatively cheap to use in many areas of the automotive industry. Natural gas can also be seen as an important bridge to other alternative, environmentally harmless sources of energy. For these reasons, CNG as a fuel attracts considerable interest worldwide.
Keywords: alternative fuels, CNG (Compressed Natural Gas), LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles)
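The hydrogen route via natural gas mentioned above rests on two well-known reactions, steam methane reforming followed by the water-gas shift; the standard reaction enthalpies are quoted only for orientation:

```latex
\mathrm{CH_4 + H_2O \;\rightleftharpoons\; CO + 3\,H_2}, \qquad \Delta H^{\circ}_{298} \approx +206\ \mathrm{kJ/mol}
```
```latex
\mathrm{CO + H_2O \;\rightleftharpoons\; CO_2 + H_2}, \qquad \Delta H^{\circ}_{298} \approx -41\ \mathrm{kJ/mol}
```

The strongly endothermic reforming step is what makes the process energy-intensive, while the shift step raises the hydrogen yield and concentrates the carbon into CO₂.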
Procedia PDF Downloads 181
115 Energy Efficiency of Secondary Refrigeration with Phase Change Materials and Impact on Greenhouse Gases Emissions
Authors: Michel Pons, Anthony Delahaye, Laurence Fournaison
Abstract:
Secondary refrigeration consists of splitting large-size direct-cooling units into volume-limited primary cooling units complemented by secondary loops for transporting and distributing cold. Such a design reduces refrigerant leaks, which represent a source of greenhouse gases emitted into the atmosphere. However, inserting the secondary circuit between the primary unit and the users' heat exchangers (UHX) increases the energy consumption of the whole process, which induces an indirect emission of greenhouse gases. It is thus important to check whether that efficiency loss is sufficiently limited for the change to be globally beneficial to the environment. Among the likely secondary fluids, phase change slurries offer several advantages: they transport latent heat, they stabilize the heat exchange temperature, and the former evaporators can still be used as UHX. The temperature level can also be adapted to the desired cooling application. Herein, the slurry {ice in mono-propylene-glycol solution} (melting temperature Tₘ of 6°C) is considered for food preservation, and the slurry {mixed hydrate of CO₂ + tetra-n-butyl-phosphonium-bromide in aqueous solution of this salt + CO₂} (melting temperature Tₘ of 13°C) is considered for air conditioning. For the sake of thermodynamic consistency, the analysis encompasses the whole process, primary cooling unit plus secondary slurry loop, and the various properties of the slurries, including their non-Newtonian viscosity. The design of the whole process is optimized according to the properties of the chosen slurry and under explicit constraints. As a first constraint, all the units must deliver the same cooling power to the user. The other constraints concern the heat exchange areas, which are prescribed, and the flow conditions, which must prevent deposition of the solid particles transported in the slurry, and their agglomeration. Minimization of the total energy consumption leads to the optimal design. In addition, the results are analyzed in terms of exergy losses, which highlights the couplings between the primary unit and the secondary loop. One important difference between the ice slurry and the mixed-hydrate one is the presence of gaseous carbon dioxide in the latter case. When the mixed-hydrate crystals melt in the UHX, CO₂ vapor is generated at a rate that depends on the phase change kinetics; the flow in the UHX and its heat and mass transfer properties are significantly modified. This effect has never been investigated before. Lastly, inserting the secondary loop between the primary unit and the users increases the temperature difference between the refrigerated space and the evaporator. This results in a loss of global energy efficiency and therefore in an increased energy consumption. The analysis shows that this loss of efficiency is not critical in the first case (Tₘ = 6°C), while the second case leads to more ambiguous results, partially because of the higher melting temperature. The consequences in terms of greenhouse gas emissions are also analyzed.
Keywords: exergy, hydrates, optimization, phase change material, thermodynamics
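A simple way to see why the extra temperature difference introduced by the secondary loop costs energy (an idealized illustration, not the authors' full exergy analysis) is the reverse-Carnot bound on the coefficient of performance:

```latex
\mathrm{COP_{max}} \;=\; \frac{T_{\mathrm{evap}}}{T_{\mathrm{cond}} - T_{\mathrm{evap}}}
```

For example, with condensation at 35°C, lowering the evaporation temperature from 0°C to −5°C to accommodate the secondary loop drops the ideal COP from about 7.8 to about 6.7, i.e., roughly a 16% increase in ideal compressor work for the same cooling duty; the real penalty depends on the actual cycle and slurry properties analysed in the paper.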
Procedia PDF Downloads 131
114 Measurement System for Human Arm Muscle Magnetic Field and Grip Strength
Authors: Shuai Yuan, Minxia Shi, Xu Zhang, Jianzhi Yang, Kangqi Tian, Yuzheng Ma
Abstract:
The precise measurement of muscle activity is essential for understanding the function of various body movements. This work aims to develop a muscle magnetic field signal detection system based on mathematical analysis. Medical research has underscored that early detection of muscle atrophy, coupled with lifestyle adjustments such as dietary control and increased exercise, can significantly improve outcomes in muscle-related diseases. Currently, surface electromyography (sEMG) is widely employed in research as an early predictor of muscle atrophy. Nonetheless, the primary limitation of using sEMG to forecast muscle strength is its inability to directly measure the signals generated by muscles. Challenges arise from potential skin-electrode contact issues due to perspiration, leading to inaccurate signals or even signal loss. Additionally, resistance and phase are significantly impacted by adipose layers. The recent emergence of optically pumped magnetometers introduces a fresh avenue for biomagnetic field measurement techniques. These magnetometers possess high sensitivity and, unlike superconducting quantum interference devices (SQUIDs), obviate the need for a cryogenic environment. They detect muscle magnetic field signals in the range of tens to thousands of femtoteslas (fT), and their use for capturing muscle magnetic field signals remains unaffected by perspiration and adipose layers. Since their introduction, optically pumped atomic magnetometers have found extensive application in exploring the magnetic fields of organs, such as cardiac and brain magnetism. The optimal operation of these magnetometers necessitates an environment with an ultra-weak magnetic field. To achieve such an environment, researchers usually combine active magnetic compensation technology with passive magnetic shielding technology. Passive magnetic shielding uses a shielding device built with high-permeability materials to attenuate the external magnetic field to a few nT. Compared with adding more shielding layers, coils that generate a reverse magnetic field to precisely compensate for the residual magnetic field are cheaper and more flexible. To attain even lower magnetic fields, compensation coils designed using the Biot-Savart law are therefore employed to counteract the residual magnetic fields. By solving the magnetic field expression at discrete points in the target region, the parameters that determine the current density distribution on the plane can be obtained through the conventional target field method. The current density is obtained from the partial derivative of the stream function, which can be represented by a combination of trigonometric functions. Mathematical optimization algorithms are introduced into the coil design to obtain the optimal current density distribution. A one-dimensional linear regression analysis was performed on the collected data, obtaining a coefficient of determination R² of 0.9349 with a p-value of 0. This statistical result indicates a stable relationship between the peak-to-peak value (PPV) of the muscle magnetic field signal and the magnitude of grip strength. This system is expected to be a widely used tool for healthcare professionals to gain deeper insights into the muscle health of their patients.
Keywords: muscle magnetic signal, magnetic shielding, compensation coils, trigonometric functions
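A minimal sketch of the one-dimensional regression described above (peak-to-peak value of the muscle magnetic signal against grip strength), using synthetic placeholder data rather than the measured values reported in the abstract:

```python
import numpy as np
from scipy import stats

# Synthetic placeholder data: grip strength (kg) and signal peak-to-peak value (pT).
rng = np.random.default_rng(1)
grip_kg = np.linspace(5, 50, 40)
ppv_pt = 0.8 * grip_kg + rng.normal(0.0, 2.0, grip_kg.size)   # assumed linear trend + noise

fit = stats.linregress(grip_kg, ppv_pt)
print(f"slope = {fit.slope:.3f} pT/kg, intercept = {fit.intercept:.3f} pT")
print(f"R^2 = {fit.rvalue**2:.4f}, p-value = {fit.pvalue:.2e}")
```

An R² close to 1 with a vanishingly small p-value, as reported in the abstract (0.9349, p ≈ 0), indicates that the PPV tracks grip strength closely enough to be used as a proxy measure.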
Procedia PDF Downloads 57
113 Estimation of State of Charge, State of Health and Power Status for the Li-Ion Battery On-Board Vehicle
Authors: S. Sabatino, V. Calderaro, V. Galdi, G. Graber, L. Ippolito
Abstract:
Climate change is a rapidly growing global threat caused mainly by increased emissions of carbon dioxide (CO₂) into the atmosphere. These emissions come from multiple sources, including industry, power generation, and the transport sector. The need to tackle climate change and reduce CO₂ emissions is indisputable. A crucial solution for achieving decarbonization in the transport sector is the adoption of electric vehicles (EVs). These vehicles use lithium-ion (Li-Ion) batteries as an energy source, making them extremely efficient and with low direct emissions. However, Li-Ion batteries are not without problems, including the risk of overheating and performance degradation. To ensure their safety and longevity, it is essential to use a battery management system (BMS). The BMS constantly monitors battery status and adjusts temperature and cell balance, ensuring optimal performance and preventing dangerous situations. Based on this monitoring, it is also able to manage the battery optimally to increase its life. Among the parameters monitored by the BMS, the main ones are state of charge (SoC), state of health (SoH), and state of power (SoP). The evaluation of these parameters can be carried out in two ways: offline, using benchtop batteries tested in the laboratory, or online, using batteries installed in moving vehicles. Online estimation is the preferred approach, as it relies on capturing real-time data from batteries operating in real-life situations, such as everyday EV use. Actual battery usage conditions are highly variable: moving vehicles are exposed to a wide range of factors, including temperature variations, different driving styles, and complex charge/discharge cycles. This variability is difficult to replicate in a controlled laboratory environment and can greatly affect performance and battery life. Online estimation captures this variety of conditions, providing a more accurate assessment of battery behavior in real-world situations. In this article, a hybrid approach based on a neural network and a statistical method is proposed for real-time estimation of the SoC, SoH, and SoP parameters of interest. These parameters are estimated from the analysis of a one-day driving profile of an electric vehicle, assumed to be divided into the following four phases: (i) partial discharge (SoC 100% - SoC 50%), (ii) partial discharge (SoC 50% - SoC 80%), (iii) deep discharge (SoC 80% - SoC 30%), and (iv) full charge (SoC 30% - SoC 100%). The neural network predicts the values of ohmic resistance and incremental capacity, while the statistical method is used to estimate the parameters of interest. This reduces the complexity of the model and improves its prediction accuracy. The effectiveness of the proposed model is evaluated by analyzing its performance in terms of root mean square error (RMSE) and mean absolute percentage error (MAPE) and comparing it with the reference method found in the literature.
Keywords: electric vehicle, Li-Ion battery, BMS, state-of-charge, state-of-health, state-of-power, artificial neural networks
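The two error metrics used to evaluate the estimator have simple definitions; the sketch below computes them for a pair of placeholder SoC traces (the arrays are illustrative, not data from the study):

```python
import numpy as np

def rmse(measured, predicted):
    """Root mean square error."""
    return float(np.sqrt(np.mean((np.asarray(measured, float) - np.asarray(predicted, float)) ** 2)))

def mape(measured, predicted):
    """Mean absolute percentage error (measured values must be non-zero)."""
    m, p = np.asarray(measured, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((m - p) / m)) * 100.0)

soc_measured  = [100, 92, 85, 78, 66, 55, 48, 37, 31, 64, 100]   # placeholder daily profile (%)
soc_estimated = [100, 91, 86, 77, 67, 54, 49, 38, 30, 65, 99]

print(f"RMSE = {rmse(soc_measured, soc_estimated):.2f} % SoC")
print(f"MAPE = {mape(soc_measured, soc_estimated):.2f} %")
```

The same two functions can be applied to the SoH and SoP estimates to compare the hybrid model against a reference method.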
Procedia PDF Downloads 67
112 Ruminal Fermentation of Biologically Active Nitrate- and Nitro-Containing Forages
Authors: Robin Anderson, David Nisbet
Abstract:
Nitrate, 3-nitro-1-propionic acid (NPA) and 3-nitro-1-propanol (NPOH) are biologically active chemicals that can accumulate naturally in rangeland grasses and forages consumed by grazing cattle, sheep and goats. While toxic to livestock if the accumulations and amounts consumed are high enough, particularly in animals having no recent exposure to the forages, these chemicals are known to be potent inhibitors of the methane-producing bacteria inhabiting the rumen. Consequently, there is interest in examining their potential use as anti-methanogenic compounds to decrease methane emissions by grazing ruminants. Presently, rumen microbes, collected freshly from a cannulated Holstein cow maintained on a 50:50 corn-based concentrate:alfalfa diet, were mixed (10 mL fluid) in 18 x 150 mm crimp top tubes with 0.5 g of high-nitrate-containing barley (Hordeum vulgare; containing 272 µmol nitrate per g forage dry matter) or NPA- or NPOH-containing milkvetch forages (Astragalus canadensis and Astragalus miser, containing 80 and 174 µmol soluble NPA or NPOH/g forage dry matter, respectively). Incubations containing 0.5 g alfalfa (Medicago sativa) were used as controls. Tubes (3 per respective forage) were capped and incubated anaerobically (under oxygen-free carbon dioxide) for 24 h at 39°C, after which time the amount of total gas produced was measured via volume displacement and headspace samples were analyzed by gas chromatography to determine concentrations of hydrogen and methane. Fluid samples were analyzed by gas chromatography to measure accumulations of fermentation acids. A completely randomized analysis of variance revealed that the nitrate-containing barley and both the NPA- and the NPOH-containing milkvetches significantly decreased methane production, by > 50%, when compared to the methane produced by populations incubated similarly with alfalfa (70.4 ± 3.6 µmol/mL incubation fluid). Accumulations of hydrogen, which typically increase when methane production is inhibited, did not differ between incubations with the nitrate-containing barley or the NPA- and NPOH-containing milkvetches and the alfalfa controls (0.09 ± 0.04 µmol/mL incubation fluid). Accumulations of fermentation acids produced in the incubations containing the high-nitrate barley and the NPA- and NPOH-containing milkvetches likewise did not differ from the accumulations observed in incubations containing alfalfa (123.5 ± 10.8, 36.0 ± 3.0, 17.1 ± 1.5, 3.5 ± 0.3, 2.3 ± 0.2, 2.2 ± 0.2 µmol/mL incubation fluid for acetate, propionate, butyrate, valerate, isobutyrate, and isovalerate, respectively). This finding indicates the microbial populations did not compensate for the decreased methane production via changes in the production of fermentation acids. Stoichiometric estimation of the fermentation balance revealed that > 77% of reducing equivalents generated during fermentation of the forages were recovered in fermentation products, and the recoveries did not differ between the alfalfa incubations and those with the high-nitrate barley or the NPA- or NPOH-containing milkvetches. Stoichiometric estimates of the amount of hexose fermented similarly did not differ between the nitrate-, NPA- and NPOH-containing incubations and those with alfalfa, averaging 99.6 ± 37.2 µmol hexose consumed/mL of incubation fluid.
These results suggest that forages containing nitrate, NPA or NPOH may be useful to reduce methane emissions from grazing ruminants, provided the risks of toxicity can be effectively managed.
Keywords: nitrate, nitropropanol, nitropropionic acid, rumen methane emissions
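The stoichiometric fermentation-balance calculations referred to above are not written out in the abstract. The sketch below applies one classical set of rumen stoichiometric relations (Marty/Demeyer-type coefficients, assumed here; the authors' exact equations and included acids may differ, so the numbers will not exactly reproduce their figures) to the reported control-incubation means.

```python
# Classical rumen fermentation-balance relations (assumed coefficients, not the authors').
# Inputs: mean accumulations for the alfalfa controls, umol/mL, from the abstract.
acetate, propionate, butyrate = 123.5, 36.0, 17.1
methane = 70.4

# Hexose fermented, assuming glucose -> 2 acetate, 2 propionate, or 1 butyrate.
hexose = 0.5 * (acetate + propionate) + butyrate
print(f"Estimated hexose fermented ~ {hexose:.1f} umol/mL "
      "(abstract reports 99.6 +/- 37.2 averaged across all forages)")

# Metabolic-hydrogen (2H) balance with the same assumed coefficients.
h2_produced = 2 * acetate + propionate + 4 * butyrate
h2_utilized = 2 * propionate + 2 * butyrate + 4 * methane
print(f"2H recovery ~ {100 * h2_utilized / h2_produced:.0f}% "
      "(convention-dependent; the abstract reports > 77% of reducing equivalents recovered)")
```

The point of the calculation is the comparison across treatments: if hexose fermented and reducing-equivalent recovery are unchanged while methane falls, the inhibition is not simply suppressing fermentation.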
Procedia PDF Downloads 128
111 Policy Views of Sustainable Integrated Solution for Increased Synergy between Light Railways and Electrical Distribution Network
Authors: Mansoureh Zangiabadi, Shamil Velji, Rajendra Kelkar, Neal Wade, Volker Pickert
Abstract:
The EU has set itself a long-term goal of reducing greenhouse gas emissions by 80-95% compared to 1990 levels by 2050, as set out in the Energy Roadmap 2050. This paper reports on the European Union H2020-funded E-Lobster project, which demonstrates tools and technologies, software and hardware, for integrating the distribution grid and railway power systems using power electronics technologies (Smart Soft Open Point - sSOP) and local energy storage. In this context, this paper describes the existing policies and regulatory frameworks of the energy market at the European level, with a special focus, at the national level, on the countries where the members of the consortium are located and where the demonstration activities will be implemented. Taking into account the disciplinary approach of E-Lobster, the main policy areas investigated include electricity, the energy market, energy efficiency, transport and smart cities. Energy storage will play a key role in enabling the EU to develop a low-carbon electricity system. In recent years, energy storage systems (ESSs) have been gaining importance due to emerging applications, especially the electrification of the transportation sector and grid integration of volatile renewables. The need for storage systems has led to performance improvements in ESS technologies and a significant price decline. This opens a new market where ESSs can be a reliable and economical solution. One such emerging market for ESS is R+G management, which will be investigated and demonstrated within the E-Lobster project. The surplus of energy in one type of power system (e.g., due to metro braking) might be directly transferred to the other power system (or vice versa). However, this would usually happen at unfavourable instants, when the recipient does not need additional power. Thus, the role of the ESS is to enhance the advantages coming from the interconnection of railway power systems and distribution grids by offering an additional energy buffer. Consequently, the surplus/deficit of energy in, e.g., railway power systems need not be immediately transferred to/from the distribution grid but can be stored and used when it is really needed. This will assure better management of the energy exchange between the railway power systems and distribution grids and lead to more efficient loss reduction. In this framework, identifying the existing policies and regulatory frameworks is crucial for the project activities and for the future development of business models for the E-Lobster solutions. The projections carried out by the European Commission, the Member States and stakeholders, and their analysis, indicate trends, challenges, opportunities and the structural changes needed to design policy measures that provide an appropriate framework for investors. This study will be used as a reference for the discussion in the envisaged workshops with stakeholders (DSOs and transport managers) in the E-Lobster project.
Keywords: light railway, electrical distribution network, electrical energy storage, policy
Procedia PDF Downloads 135
110 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, reliable condition assessment of concrete structures is becoming of increasing interest for asset owners to plan timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, the structural performance. Quantitative determination of chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used for the prediction of its future development and the associated risks. At present, wet chemical analysis of ground concrete samples by a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly, and the chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example of the visualization of Li transport in concrete is also shown. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes. Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared with and verified against laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure - the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.
Keywords: chemical analysis, concrete, LIBS, spectroscopy
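Quantitative LIBS chloride profiles rest on a calibration against reference samples analysed by wet chemistry; the sketch below shows that generic workflow. It is not the authors' procedure, and all numbers are placeholders.

```python
import numpy as np

# Placeholder calibration set: Cl line intensity ratio vs. reference chloride
# content from wet chemical analysis (% by mass of binder).
intensity_ratio = np.array([0.02, 0.05, 0.09, 0.14, 0.18, 0.23])
cl_ref_pct      = np.array([0.10, 0.30, 0.60, 1.00, 1.30, 1.70])

slope, intercept = np.polyfit(intensity_ratio, cl_ref_pct, 1)   # linear calibration

def chloride_from_intensity(ratio):
    """Convert a measured intensity ratio to chloride content (% of binder mass)."""
    return slope * ratio + intercept

# Example: 1 mm depth increments measured on a drilled core (placeholder readings).
profile = [0.21, 0.16, 0.11, 0.07, 0.04]
for depth_mm, r in enumerate(profile, start=1):
    print(f"depth {depth_mm} mm: Cl ~ {chloride_from_intensity(r):.2f} % of binder")
```

The resulting millimetre-resolution profile is what gives LIBS its advantage over the 10 mm intervals of ground-sample wet chemistry when feeding ingress models.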
Procedia PDF Downloads 105
109 Potential of Dredged Material for CSEB in Building Structure
Authors: BoSheng Liu
Abstract:
The research goal is to re-imagine a locally sourced waste product as a building material. The author aims to contribute to compressed stabilized earth block (CSEB) technology by investigating the promising role of dredged material as an alternative building ingredient in the production of bricks and tiles. Dredged material comes from the sediment deposited near the shore or downstream, where the water current velocity decreases. This sediment needs to be dredged to keep waterways navigable; thus, there are mounds of dredged material stored at the bay. It is the interest of this research to reduce the filtered inorganic soil in the production of CSEB and replace it with locally dredged material from the Atchafalaya River in Morgan City, Louisiana. Technological and mechanical innovations have evolved the traditional adobe production method, which mixes soil and natural fiber into molded bricks, into chemically stabilized CSEB made by compressing the clay mixture and stabilizer in a compression chamber under a particular load. In the case of dredged material CSEB (DM-CSEB), cement plays an essential role as the binding agent contributing to the unit strength while sustaining the filtered inorganic soil. Each DM-CSEB unit is made in a compression chamber at 580 psi (i.e., 4 MPa). The research studied cement contents from 5% to 10% along with a range of dredged material contents, which varied from 20% to 80%. The mixture composition affected the DM-CSEB's strength and workability during and after compression. Results indicated two optimal workabilities of the mixture: 27% fine clay content and 63% dredged material with 10% cement, or 28% fine clay content and 67% dredged material with 5% cement. The final DM-CSEB product generated between 10 and 13 times less carbon emissions compared to conventional fired masonry. DM-CSEB satisfied the strength requirements given by the ASTM C62 and ASTM C34 standards for construction materials. One of the final evaluations tested and validated the material performance by designing and constructing an architectural, conical tile-vault prototype measuring 28" by 40" by 24". The vault utilized a computational form-finding approach to generate its geometry, which optimized the correlation between the vault geometry and the structural load distribution. A series of scaffolds was deployed to create the framework for the tile-vault construction. The final tile-vault structure was made from two layers of DM-CSEB tiles joined by mortar, and its construction used over 110 tiles. The tile-vault prototype was capable of carrying over 400 lbs of live load, which further demonstrated the feasibility of dredged material as a construction material. The presented case study of Dredged Material Compressed Stabilized Earth Block (DM-CSEB) provides a first impression of dredged material in terms of the clayey mixture process, structural performance, and construction practice. Overall, the approach of integrating dredged material into building materials can be feasible, regionally sourced, cost-effective, and environment-friendly.
Keywords: dredged material, compressed stabilized earth block, tile-vault, regionally sourced, environment-friendly
Procedia PDF Downloads 115108 Analytical and Numerical Modeling of Strongly Rotating Rarefied Gas Flows
Authors: S. Pradhan, V. Kumaran
Abstract:
Centrifugal gas separation processes effect separation by utilizing the difference in the mole fraction in a high speed rotating cylinder caused by the difference in molecular mass, and consequently the centrifugal force density. These have been widely used in isotope separation because chemical separation methods cannot be used to separate isotopes of the same chemical species. More recently, centrifugal separation has also been explored for the separation of gases such as carbon dioxide and methane. The efficiency of separation is critically dependent on the secondary flow generated due to temperature gradients at the cylinder wall or due to inserts, and it is important to formulate accurate models for this secondary flow. The widely used Onsager model for secondary flow is restricted to very long cylinders where the length is large compared to the diameter, the limit of high stratification parameter, where the gas is restricted to a thin layer near the wall of the cylinder, and it assumes that there is no mass difference in the two species while calculating the secondary flow. There are two objectives of the present analysis of the rarefied gas flow in a rotating cylinder. The first is to remove the restriction of high stratification parameter, and to generalize the solutions to low rotation speeds where the stratification parameter may be O (1), and to apply for dissimilar gases considering the difference in molecular mass of the two species. Secondly, we would like to compare the predictions with molecular simulations based on the direct simulation Monte Carlo (DSMC) method for rarefied gas flows, in order to quantify the errors resulting from the approximations at different aspect ratios, Reynolds number and stratification parameter. In this study, we have obtained analytical and numerical solutions for the secondary flows generated at the cylinder curved surface and at the end-caps due to linear wall temperature gradient and external gas inflow/outflow at the axis of the cylinder. The effect of sources of mass, momentum and energy within the flow domain are also analyzed. The results of the analytical solutions are compared with the results of DSMC simulations for three types of forcing, a wall temperature gradient, inflow/outflow of gas along the axis, and mass/momentum input due to inserts within the flow. The comparison reveals that the boundary conditions in the simulations and analysis have to be matched with care. The commonly used diffuse reflection boundary conditions at solid walls in DSMC simulations result in a non-zero slip velocity as well as a temperature slip (gas temperature at the wall is different from wall temperature). These have to be incorporated in the analysis in order to make quantitative predictions. In the case of mass/momentum/energy sources within the flow, it is necessary to ensure that the homogeneous boundary conditions are accurately satisfied in the simulations. When these precautions are taken, there is excellent agreement between analysis and simulations, to within 10 %, even when the stratification parameter is as low as 0.707, the Reynolds number is as low as 100 and the aspect ratio (length/diameter) of the cylinder is as low as 2, and the secondary flow velocity is as high as 0.2 times the maximum base flow velocity.Keywords: rotating flows, generalized onsager and carrier-Maslen model, DSMC simulations, rarefied gas flow
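For readers unfamiliar with the terminology, the isothermal base state of a gas in a rapidly rotating cylinder and one common definition of the stratification parameter can be written as below; the exact convention used by the authors may differ, so this should be read as an illustrative sketch rather than the paper's own formulation.

```latex
% Solid-body rotation at angular velocity \Omega produces a radially
% stratified density field relative to the wall density \rho_w at r = R:
\rho(r) \;=\; \rho_w \,
\exp\!\left(-\,\frac{m\,\Omega^{2}\left(R^{2}-r^{2}\right)}{2\,k_{B}T}\right),
\qquad
A^{2} \;\equiv\; \frac{m\,\Omega^{2}R^{2}}{2\,k_{B}T}.
```

For A² ≫ 1 the gas collects in a thin layer near the wall, which is the high-stratification limit assumed by the Onsager model, while A of order one corresponds to the low-speed regime targeted in the abstract (e.g., A ≈ 0.707).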
Procedia PDF Downloads 398107 A Comprehensive Study of Spread Models of Wildland Fires
Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation that is used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors like weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. By using a comparison approach, a critical analysis is provided by identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights that are provided by synthesizing established information. Fire spread models provide insights into potential fire behavior, facilitating authorities to make informed decisions about evacuation activities, allocating resources for fire-fighting efforts, and planning for preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and promotes our understanding of the way forest fires spread. Some of the known models in this field are Rothermel’s wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, cellular automata model, and others. The key characteristics that these models consider include weather (includes factors such as wind speed and direction), topography (includes factors like landscape elevation), and fuel availability (includes factors like types of vegetation) among other factors. The models discussed are physics-based, data-driven, or hybrid models, also utilizing ML techniques like attention-based neural networks to enhance the performance of the model. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action. 
Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling
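As a concrete illustration of the cellular automata family of spread models mentioned above, the following self-contained Python sketch evolves a fire on a grid using a simple probabilistic spread rule biased by wind; the rule, probabilities and wind weighting are illustrative assumptions and do not reproduce any of the specific published models (e.g., Rothermel's model or FARSITE) cited in the abstract.

```python
import numpy as np

UNBURNED, BURNING, BURNED = 0, 1, 2

def step(grid, fuel, p_base=0.35, wind=(0, 1), wind_gain=0.25, rng=None):
    """Advance the fire by one time step.

    grid      -- cell states (UNBURNED/BURNING/BURNED)
    fuel      -- per-cell fuel factor in [0, 1] scaling ignition probability
    wind      -- (dy, dx) vector giving the wind direction
    wind_gain -- extra probability when spread aligns with the wind
    """
    rng = rng or np.random.default_rng()
    new = grid.copy()
    for y, x in np.argwhere(grid == BURNING):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                ny, nx = y + dy, x + dx
                if 0 <= ny < grid.shape[0] and 0 <= nx < grid.shape[1] \
                        and grid[ny, nx] == UNBURNED:
                    # Alignment of the spread direction with the wind vector.
                    align = (dy * wind[0] + dx * wind[1]) / np.hypot(dy, dx)
                    p = fuel[ny, nx] * (p_base + wind_gain * max(align, 0.0))
                    if rng.random() < p:
                        new[ny, nx] = BURNING
        new[y, x] = BURNED
    return new

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    size = 60
    fuel = rng.uniform(0.4, 1.0, (size, size))   # heterogeneous vegetation
    grid = np.full((size, size), UNBURNED)
    grid[size // 2, size // 2] = BURNING          # ignition point
    for _ in range(40):
        grid = step(grid, fuel, wind=(0, 1), rng=rng)
    print("burned cells after 40 steps:", int((grid == BURNED).sum()))
```

Physics-based and data-driven models replace the hand-set probabilities used here with fuel, slope and weather submodels or with learned parameters, but the state-update structure is the same.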
Procedia PDF Downloads 81106 Impact of Wastewater Irrigation on Soil Quality and Productivity of Tuberose (Polianthes tuberosa L. cv. Prajwal)
Authors: D. S. Gurjar, R. Kaur, K. P. Singh, R. Singh
Abstract:
A large volume of wastewater is generated from urban areas in India. Due to its ready availability, low energy requirement and nutrient richness, farmers in urban and peri-urban areas are deliberately using wastewater to grow high-value vegetable crops. Wastewater contains pathogens and toxic pollutants, which can enter the food chain when wastewater is used to irrigate vegetable crops. Hence, wastewater can be used for growing commercial flower crops, which may avoid food chain contamination. Tuberose (Polianthes tuberosa L.), cultivated over an area of about 30,000 ha, is one of the most important commercially grown flower crops in India. Its popularity is mainly due to the sweet fragrance as well as the long keeping quality of the flower spikes. The flower spikes of tuberose fetch a high market price and usually bloom during the summer and rainy seasons, when there is a meager supply of other flowers in the market. It has a high irrigation water requirement, and the freshwater supply is inadequate in the tuberose-growing areas of India. Therefore, wastewater may fulfill the water and nutrient requirements and may enhance the productivity of tuberose. Keeping this in view, the present study was carried out at the WTC farm of ICAR-Indian Agricultural Research Institute, New Delhi in 2014-15. Prajwal was the test crop variety. The seven treatments were T-1: Wastewater irrigation at 0.6 ID/CPE, T-2: Wastewater irrigation at 0.8 ID/CPE, T-3: Wastewater irrigation at 1.0 ID/CPE, T-4: Wastewater irrigation at 1.2 ID/CPE, T-5: Wastewater irrigation at 1.4 ID/CPE, T-6: Conjunctive use of groundwater and wastewater irrigation at 1.0 ID/CPE in cyclic mode, and T-7: Control (groundwater irrigation at 1.0 ID/CPE), laid out in a randomized block design with three replications. Wastewater and groundwater samples were collected on a monthly basis (April 2014 to March 2015) and analyzed for different parameters of irrigation quality (pH, EC, SAR, RSC), pollution hazard (BOD, toxic heavy metals and faecal coliforms) and nutrient potential (N, P, K, Cu, Fe, Mn, Zn) as per standard methods. After harvest of the tuberose crop, soil samples were also collected and analyzed for different parameters of soil quality as per standard methods. The vegetative growth and flower parameters were recorded at the flowering stage of the tuberose plants. Results indicated that the wastewater samples had higher nutrient potential and pollution hazard as compared to the groundwater used for the experimental crop. Soil quality parameters such as pH, EC, available phosphorus and potassium and heavy metals (Cu, Fe, Mn, Zn, Cd, Pb, Ni, Cr, Co, As) were not significantly changed, whereas organic carbon and available nitrogen were significantly higher in the treatments where wastewater irrigations were given at 1.2 and 1.4 ID/CPE as compared to groundwater irrigation. Significantly higher plant height (68.47 cm), leaves per plant (78.35), spike length (99.93 cm), rachis length (37.40 cm), number of florets per spike (56.53), cut spike yield (0.93 lakh/ha) and loose flower yield (8.5 t/ha) were observed in the treatment of wastewater irrigation at 1.2 ID/CPE. The study concluded that wastewater of the given quality improves the productivity of tuberose without an adverse impact on soil quality/health. However, its long-term impacts need to be further evaluated.Keywords: conjunctive use, irrigation, tuberose, wastewater
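To make the treatment notation concrete, the sketch below illustrates one common reading of scheduling irrigation at a given ID/CPE (irrigation depth to cumulative pan evaporation) ratio: a fixed irrigation depth is applied whenever the cumulative pan evaporation since the last irrigation reaches depth/ratio. The daily evaporation series and the 50 mm depth are hypothetical, and the actual scheduling protocol used in the study may differ.

```python
def irrigation_schedule(daily_cpe_mm, ratio, depth_mm=50.0):
    """Return the days on which an irrigation of `depth_mm` would be applied.

    Assumed interpretation of ID/CPE scheduling: irrigate whenever the
    cumulative pan evaporation since the last irrigation reaches depth/ratio.
    """
    trigger = depth_mm / ratio
    events, cumulative = [], 0.0
    for day, evap in enumerate(daily_cpe_mm, start=1):
        cumulative += evap
        if cumulative >= trigger:
            events.append(day)
            cumulative = 0.0
    return events

if __name__ == "__main__":
    # Hypothetical daily pan evaporation (mm) over 30 summer days.
    daily_evap = [6.5, 7.0, 6.8, 7.2, 6.0, 5.5, 6.9, 7.4, 7.1, 6.3,
                  6.6, 7.0, 6.7, 6.2, 5.9, 6.8, 7.3, 7.0, 6.4, 6.1,
                  6.9, 7.2, 6.6, 6.0, 5.8, 6.7, 7.1, 6.9, 6.5, 6.2]
    for ratio in (0.6, 1.0, 1.4):
        days = irrigation_schedule(daily_evap, ratio)
        print(f"ID/CPE {ratio}: {len(days)} irrigations on days {days}")
```

Under this reading, a higher ratio triggers more frequent irrigation events, which is consistent with the 1.2 and 1.4 ID/CPE treatments supplying more water (and nutrients) than the 0.6 ID/CPE treatment.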
Procedia PDF Downloads 331105 Assessment of Tidal Influence in Spatial and Temporal Variations of Water Quality in Masan Bay, Korea
Abstract:
Slack-tide sampling was carried out at seven stations at high and low tides over a tidal cycle in summer (July, August, September) and fall (October) of 2016 to determine the differences in water quality between tides in Masan Bay. The data were analyzed by Pearson correlation and factor analysis. The mixing state of all the water quality components investigated is well explained by their correlation with salinity (SAL). Turbidity (TURB), dissolved silica (DSi), nitrite and nitrate nitrogen (NNN) and total nitrogen (TN), which find their way into the bay from the streams and have no internal source and sink reaction, showed a strong negative correlation with SAL at low tide, indicating conservative mixing. On the contrary, in summer and fall, dissolved oxygen (DO), hydrogen sulfide (H2S) and chemical oxygen demand with KMnO4 (CODMn) of the surface and bottom water, which are sensitive to internal source and sink reactions, showed no significant correlation with SAL at high and low tides. The remaining water quality parameters showed a conservative or a non-conservative mixing pattern depending on the mixing characteristics at high and low tides, determined by the functional relationship between the changes in flushing time and the changes in the characteristics of the water quality components of the end-members in the bay. Factor analysis performed on the concentration-difference data sets between high and low tides helped identify the principal latent variables. The concentration differences varied spatially and temporally. Principal factor (PF) score plots for each monitoring situation showed that the variations were strongly associated with the monitoring sites. At sampling station 1 (ST1), temperature (TEMP), SAL, DSi, TURB, NNN and TN of the surface water in summer; TEMP, SAL, DSi, DO, TURB, NNN, TN, reactive soluble phosphorus (RSP) and total phosphorus (TP) of the bottom water in summer; TEMP, pH, SAL, DSi, DO, TURB, CODMn, particulate organic carbon (POC), ammonia nitrogen (AMN), NNN, TN and fecal coliform (FC) of the surface water in fall; and TEMP, pH, SAL, DSi, H2S, TURB, CODMn, AMN, NNN and TN of the bottom water in fall commonly emerged as the most significant parameters, showing large concentration differences between high and low tides. At the other stations, the significant parameters differed according to the spatial and temporal variations of the mixing pattern in the bay. In fact, no estuary always maintains steady-state flow conditions. The mixing regime of an estuary may change at any time from linear to non-linear due to changes in flushing time arising from the combination of hydrogeometric properties, freshwater inflow and tidal action; furthermore, changes in end-member conditions due to internal sinks and sources make the occurrence of concentration differences inevitable. Therefore, when investigating the water quality of an estuary, it is necessary to adopt a sampling method that accounts for the tide in order to obtain representative average water quality data.Keywords: conservative mixing, end-member, factor analysis, flushing time, high and low tide, latent variables, non-conservative mixing, slack-tide sampling, spatial and temporal variations, surface and bottom water
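A minimal sketch of the statistical workflow described above, screening each constituent's mixing behaviour against salinity and then factor-analysing the high-minus-low tide concentration differences, is given below in Python; the column names, significance threshold and choice of two factors are assumptions for illustration, not the study's actual settings.

```python
import pandas as pd
from scipy.stats import pearsonr
from sklearn.decomposition import FactorAnalysis

def classify_mixing(df, salinity_col="SAL", alpha=0.05):
    """Flag constituents whose correlation with salinity suggests conservative mixing."""
    results = {}
    for col in df.columns:
        if col == salinity_col:
            continue
        r, p = pearsonr(df[salinity_col], df[col])
        conservative = (p < alpha) and (r < 0)   # strong negative correlation with SAL
        results[col] = {"r": round(r, 3), "p": round(p, 4), "conservative": conservative}
    return pd.DataFrame(results).T

def tide_difference_factors(high_tide_df, low_tide_df, n_factors=2):
    """Factor-analyse the high-minus-low tide concentration differences."""
    diffs = high_tide_df - low_tide_df
    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    scores = fa.fit_transform(diffs)
    loadings = pd.DataFrame(fa.components_.T, index=diffs.columns,
                            columns=[f"PF{i+1}" for i in range(n_factors)])
    return loadings, scores

# Usage (assuming tidy per-station data frames sharing the same columns):
# summary = classify_mixing(low_tide_df)
# loadings, scores = tide_difference_factors(high_tide_df, low_tide_df)
```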
Procedia PDF Downloads 130104 Sinhala Sign Language to Grammatically Correct Sentences using NLP
Authors: Anjalika Fernando, Banuka Athuraliya
Abstract:
This paper presents a comprehensive approach for converting Sinhala Sign Language (SSL) into grammatically correct sentences using Natural Language Processing (NLP) techniques in real time. While previous studies have explored various aspects of SSL translation, the research gap lies in the absence of grammar checking for SSL. This work aims to bridge this gap by proposing a two-stage methodology that leverages deep learning models to detect signs and translate them into coherent sentences, ensuring grammatical accuracy. The first stage of the approach involves the utilization of a Long Short-Term Memory (LSTM) deep learning model to recognize and interpret SSL signs. By training the LSTM model on a dataset of SSL gestures, it learns to accurately classify and translate these signs into textual representations. The LSTM model achieves a commendable accuracy rate of 94%, demonstrating its effectiveness in accurately recognizing and translating SSL gestures. Building upon the successful recognition and translation of SSL signs, the second stage of the methodology focuses on improving the grammatical correctness of the translated sentences. The project employs a Neural Machine Translation (NMT) architecture, consisting of an encoder and decoder with LSTM components, to enhance the syntactical structure of the generated sentences. By training the NMT model on a parallel corpus of grammatically incorrect Sinhala sentences and their corresponding grammatically correct translations, it learns to generate coherent and grammatically accurate sentences. The NMT model achieves an impressive accuracy rate of 98%, affirming its capability to produce linguistically sound translations. The proposed approach offers significant contributions to the field of SSL translation and grammar correction. Addressing the critical issue of grammar checking, it enhances the usability and reliability of SSL translation systems, facilitating effective communication between hearing-impaired and non-sign-language users. Furthermore, the integration of deep learning techniques, such as LSTM and NMT, ensures the accuracy and robustness of the translation process. This research holds great potential for practical applications, including educational platforms, accessibility tools, and communication aids for the hearing-impaired. Furthermore, it lays the foundation for future advancements in SSL translation systems, fostering inclusive and equal opportunities for the deaf community. Future work includes expanding the existing datasets to further improve the accuracy and generalization of the SSL translation system. Additionally, the development of a dedicated mobile application would enhance the accessibility and convenience of SSL translation on handheld devices. Furthermore, efforts will be made to enhance the current application for educational purposes, enabling individuals to learn and practice SSL more effectively. Another area of future exploration involves enabling two-way communication, allowing seamless interaction between sign-language users and non-sign-language users. In conclusion, this paper presents a novel approach for converting Sinhala Sign Language gestures into grammatically correct sentences using NLP techniques in real time. The two-stage methodology, comprising an LSTM model for sign detection and translation and an NMT model for grammar correction, achieves high accuracy rates of 94% and 98%, respectively.
By addressing the lack of grammar checking in existing SSL translation research, this work contributes significantly to the development of more accurate and reliable SSL translation systems, thereby fostering effective communication and inclusivity for the hearing-impaired community.Keywords: Sinhala sign language, sign language, NLP, LSTM, NMT
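The first stage described above, an LSTM classifier over sign gesture sequences, can be sketched as follows in Python with Keras; the sequence length, feature dimension (e.g., hand/pose keypoints per frame) and number of sign classes are placeholder assumptions, and the authors' actual architecture and dataset are not given in the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, N_FEATURES, N_SIGNS = 30, 126, 50  # assumed: 30 frames, 126 keypoint values, 50 signs

def build_sign_classifier():
    """Stacked LSTM that maps a keypoint sequence to a sign-class probability vector."""
    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        layers.LSTM(128, return_sequences=True),
        layers.LSTM(64),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_SIGNS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_sign_classifier()
    model.summary()
    # The second stage would pair this with an encoder-decoder (seq2seq) LSTM
    # model that rewrites the raw gloss sequence into a grammatical sentence.
```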
Procedia PDF Downloads 104103 Understanding Systemic Barriers (and Opportunities) to Increasing Uptake of Subcutaneous Medroxy Progesterone Acetate Self-Injection in Health Facilities in Nigeria
Authors: Oluwaseun Adeleke, Samuel O. Ikani, Fidelis Edet, Anthony Nwala, Mopelola Raji, Simeon Christian Chukwu
Abstract:
Background: The DISC project collaborated with partners to implement demand creation and service delivery interventions, including the MoT (Moment of Truth) innovation, in over 500 health facilities across 15 states. This has increased the voluntary conversion rate to self-injection among women who opt for injectable contraception. While some facilities recorded an increasing trend in key performance indicators, few others persistently performed sub-optimally due to provider and system-related barriers. Methodology: Twenty-two facilities performing sub-optimally were selected purposively from three Nigerian states. Low productivity was appraised using low reporting rates and poor SI conversion rates as indicators. Interviews were conducted with health providers across these health facilities using a rapid diagnosis tool. The project also conducted a data quality assessment that evaluated the veracity of data elements reported across the three major sources of family planning data in the facility. Findings: The inability and sometimes refusal of providers to support clients to self-inject effectively was associated with the misunderstanding of its value to their work experience. It was also observed that providers still held a strong influence over clients’ method choices. Furthermore, providers held biases and misconceptions about DMPA-SC that restricted the access of obese clients and new acceptors to services – a clear departure from the recommendations of the national guidelines. Additionally, quality of care standards was compromised because job aids were not used to inform service delivery. Facilities performing sub-optimally often under-reported DMPA-SC utilization data, and there were multiple uncoordinated responsibilities for recording and reporting. Additionally, data validation meetings were not regularly convened, and these meetings were ineffective in authenticating data received from health facilities. Other reasons for sub-optimal performance included poor documentation and tracking of stock inventory resulting in commodity stockouts, low client flow because of poor positioning of health facilities, and ineffective messaging. Some facilities lacked adequate human and material resources to provide services effectively and received very few supportive supervision visits. Supportive supervision visits and Data Quality Audits have been useful to address the aforementioned performance barriers. The project has deployed digital DMPA-SC self-injection checklists that have been aligned with nationally approved templates. During visits, each provider and community mobilizer is accorded special attention by the supervisor until he/she can perform procedures in line with best practice (protocol). Conclusion: This narrative provides a summary of a range of factors that identify health facilities performing sub-optimally in their provision of DMPA-SC services. Findings from this assessment will be useful during project design to inform effective strategies. As the project enters its final stages of implementation, it is transitioning high-impact activities to state institutions in the quest to sustain the quality of service beyond the tenure of the project. The project has flagged activities, as well as created protocols and tools aimed at placing state-level stakeholders at the forefront of improving productivity in health facilities.Keywords: family planning, contraception, DMPA-SC, self-care, self-injection, barriers, opportunities, performance
Procedia PDF Downloads 79102 Role of Baseline Measurements in Assessing Air Quality Impact of Shale Gas Operations
Authors: Paula Costa, Ana Picado, Filomena Pinto, Justina Catarino
Abstract:
Environmental impact associated with large-scale shale gas development is of major concern to the public, policy makers and other stakeholders. To assess this impact on the atmosphere, it is important to monitor ambient air quality prior to and during all shale gas operation stages. Baseline observations can provide a standard of the pre-shale-gas-development state of the environment. The lack of baseline concentrations has been identified as an important knowledge gap in assessing the impact of air emissions due to shale gas operations. In fact, baseline monitoring of air quality is missing in several regions where there is a strong possibility of future shale gas exploration. This makes it difficult to properly identify, quantify and characterize environmental impacts that may be associated with shale gas development. The implementation of a baseline air monitoring program is imperative to be able to assess the total emissions related to shale gas operations. In fact, any monitoring program should be designed to provide indicative information on background levels. A baseline air monitoring program should identify and characterize targeted air pollutants, most frequently described from monitoring and emission measurements, as well as those expected from hydraulic fracturing activities, and should establish ambient air conditions prior to the start-up of potential emission sources from shale gas operations. This program has to be planned for at least one year to account for ambient variations. In the literature, in addition to GHG emissions of CH4, CO2 and nitrogen oxides (NOx), fugitive emissions from shale gas production can release volatile organic compounds (VOCs), aldehydes (formaldehyde, acetaldehyde) and hazardous air pollutants (HAPs). The VOCs include, among others, benzene, toluene, ethylbenzene, xylenes, hexanes, 2,2,4-trimethylpentane and styrene. The concentrations of six air pollutants (ozone, particulate matter (PM), carbon monoxide (CO), nitrogen oxides (NOx), sulphur oxides (SOx), and lead), whose regional ambient air levels are regulated by the Environmental Protection Agency (EPA), are often discussed. However, the main concern regarding the air emissions associated with shale gas operations seems to be the leakage of methane. Methane is identified as a compound of major concern due to its strong global warming potential. The identification of methane leakage from shale gas activities is complex due to the existence of several other CH4 sources (e.g., landfills, agricultural activity or gas pipelines/compressor stations). An integrated monitoring study of methane emissions may be a suitable means of distinguishing the contribution of different sources of methane to ambient levels. All data need to be carefully interpreted, also taking into account the meteorological conditions of the site. This may require the implementation of a more intensive monitoring program. Thus, it is essential to develop a low-cost sampling strategy suitable for establishing pre-operation baseline data, as well as an integrated monitoring program to assess the emissions from shale gas operation sites. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 640715.Keywords: air emissions, baseline, greenhouse gases, shale gas
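As a simple illustration of how a year of baseline observations could later be used, the sketch below derives a site-specific background envelope (here an upper percentile of the baseline methane record) and flags operational-phase measurements that exceed it; the percentile threshold and the synthetic data are assumptions for illustration, not values from any monitoring protocol.

```python
import numpy as np

def baseline_threshold(baseline_ppm, percentile=95):
    """Upper envelope of the pre-operation (baseline) methane record."""
    return np.percentile(baseline_ppm, percentile)

def flag_exceedances(operational_ppm, threshold):
    """Indices of operational measurements above the baseline envelope."""
    operational_ppm = np.asarray(operational_ppm)
    return np.flatnonzero(operational_ppm > threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic hourly CH4 (ppm): one year of baseline, then an operational period.
    baseline = rng.normal(1.95, 0.08, 24 * 365)     # near-ambient background
    operational = rng.normal(1.97, 0.08, 24 * 90)
    operational[500:520] += 0.6                      # injected leak-like episode
    thr = baseline_threshold(baseline)
    flags = flag_exceedances(operational, thr)
    print(f"threshold = {thr:.2f} ppm, flagged {flags.size} of {operational.size} hours")
```

In a real deployment, attributing flagged hours to shale gas operations rather than landfills, agriculture or pipelines would additionally require the meteorological and source-apportionment analysis discussed in the abstract.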
Procedia PDF Downloads 330101 Revolutionizing Oil Palm Replanting: Geospatial Terrace Design for High-precision Ground Implementation Compared to Conventional Methods
Authors: Nursuhaili Najwa Masrol, Nur Hafizah Mohammed, Nur Nadhirah Rusyda Rosnan, Vijaya Subramaniam, Sim Choon Cheak
Abstract:
Replanting in oil palm cultivation is vital to enable the introduction of new planting materials and provides an opportunity to improve the road, drainage, terrace design, and planting density. Oil palm replanting is fundamentally necessary every 25 years. The adoption of a digital replanting blueprint is imperative, as it can assist the Malaysian oil palm industry in addressing challenges such as labour shortages and limited expertise related to replanting tasks. Effective replanting planning should commence at least 6 months prior to the actual replanting process. Therefore, this study helps to plan and design the replanting blueprint with high-precision translation on the ground. With the advancement of geospatial technology, it is now feasible to engage in thoroughly researched planning, which can help maximize the potential yield. A blueprint designed before replanting enhances management's ability to optimize the planting program, address manpower issues, and increase productivity. In terrace planting blueprints, geographic tools have been utilized to design the roads, drainage, terraces, and planting points based on the ARM standards. These designs are mapped with location information and undergo statistical analysis. The geospatial approach is essential in precision agriculture and ensures an accurate translation of the design to the ground through high-accuracy technologies. In this study, geospatial and remote sensing technologies played a vital role. LiDAR data were employed to derive the Digital Elevation Model (DEM), enabling the precise selection of terraces, while ortho imagery was used for validation purposes. Throughout the design process, Geographical Information System (GIS) tools were extensively utilized. To assess the design's reliability on the ground compared with the current conventional method, high-precision GPS instruments such as the EOS Arrow Gold and HIPER VR GNSS were used, both offering accuracy levels between 0.3 cm and 0.5 cm. A nearest distance analysis was generated to compare the design with the actual planting on the ground. The analysis revealed that it could not be applied to the roads due to discrepancies between the actual roads and the blueprint design, which resulted in minimal variance. In contrast, the terraces closely adhered to the GPS markings, with the largest variance distance being less than 0.5 m compared to the actual terraces constructed. Considering the required slope for terrace planting, which must be greater than 6 degrees, the study found that approximately 65% of the terracing was constructed on a 12-degree slope, while over 50% of the terracing was constructed on slopes exceeding the minimum. Utilizing blueprint replanting offers promising strategies for optimizing land utilization in agriculture. This approach harnesses technology and meticulous planning to yield advantages, including increased efficiency, enhanced sustainability, and cost reduction. This study suggests that practical implementation of this technique can lead to tangible and significant improvements in the agricultural sector. To boost efficiency further, future initiatives will require more sophisticated techniques and the incorporation of precision GPS devices for upcoming blueprint replanting projects; this strategic progression aims to guarantee the precision of both the blueprint design stage and its subsequent implementation in the field.
Looking ahead, automating digital blueprints is necessary to reduce time, workforce, and costs in commercial production.Keywords: replanting, geospatial, precision agriculture, blueprint
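The nearest distance analysis used to compare the as-built terraces with the blueprint can be sketched with standard geospatial Python tooling as below; the file names and the 0.5 m tolerance are illustrative assumptions, and the layers are assumed to be in a projected CRS so that distances are in metres.

```python
import geopandas as gpd

def nearest_deviation(design_path, asbuilt_path, tolerance_m=0.5):
    """Distance from each as-built feature to the nearest blueprint feature."""
    design = gpd.read_file(design_path)     # blueprint terraces (or planting points)
    asbuilt = gpd.read_file(asbuilt_path)   # GPS-surveyed as-built features
    # Both layers must share the same projected CRS so distances are in metres.
    asbuilt = asbuilt.to_crs(design.crs)
    merged_design = design.geometry.unary_union
    asbuilt["dev_m"] = asbuilt.geometry.distance(merged_design)
    asbuilt["within_tol"] = asbuilt["dev_m"] <= tolerance_m
    return asbuilt

# Usage (hypothetical file names):
# result = nearest_deviation("blueprint_terraces.gpkg", "asbuilt_terraces.gpkg")
# print(result["dev_m"].describe())
# print("share within tolerance:", result["within_tol"].mean())
```

The same routine run against a road layer would expose the larger design-versus-as-built discrepancies reported for roads, while the terrace layer would show deviations mostly inside the 0.5 m band.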
Procedia PDF Downloads 82100 Interdigitated Flexible Li-Ion Battery by Aerosol Jet Printing
Authors: Yohann R. J. Thomas, Sébastien Solan
Abstract:
Conventional battery technology involves the assembly of electrode/separator/electrode by standard techniques such as stacking or winding, depending on the format size. In that type of battery, coating or pasting techniques are used only for the electrode process. These processes are suited to large-scale production of batteries and are well adapted to many application requirements. Nevertheless, demand is rising for easier and more cost-efficient production modes as well as for flexible, custom-shaped and efficient small-sized batteries. Thin-film, printable batteries are one of the key areas for printed electronics. In the frame of the European BASMATI project, we are investigating the feasibility of a new lithium-ion battery design: an interdigitated planar core design. A polymer substrate is used to produce bendable and flexible rechargeable accumulators. Directly and fully printed batteries make it possible to interconnect the accumulator with other electronic functions, for example organic solar cells (harvesting function), printed sensors (autonomous sensors) or RFID (communication function), on a common substrate to produce fully integrated, thin and flexible new devices. To fulfill those specifications, a high-resolution printing process has been selected: aerosol jet printing. In order to fit these process parameters, we worked on nanomaterial formulations for the current collectors and electrodes. In addition, an advanced printed polymer electrolyte is being developed to be implemented directly in the printing process in order to avoid the liquid electrolyte filling step and to improve safety and flexibility. Results: Three different current collectors have been studied and printed successfully. An ink of commercial copper nanoparticles was formulated and printed, and flash sintering was then applied to the interdigitated design. A gold ink was also printed; the resulting material was partially self-sintered and did not require any high-temperature post-treatment. Finally, carbon nanotubes were also printed with high resolution and well-defined patterns. Different electrode materials were formulated and printed according to the interdigitated design. For cathodes, NMC and LFP were printed successfully. For anodes, LTO and graphite have been shown to be good candidates for the fully printed battery. The electrochemical performance of those materials has been evaluated in a standard coin cell with a lithium-metal counter electrode, and the results are similar to those of a traditional ink formulation and process. A jellified plastic crystal solid-state electrolyte has been developed and showed performance comparable to classical liquid carbonate electrolytes with two different materials. In our future developments, focus will be put on several tasks. In the first place, we will synthesize and formulate new specific nanomaterials based on metal oxides. Then a fully printed device will be produced and its electrochemical performance will be evaluated.Keywords: high resolution digital printing, lithium-ion battery, nanomaterials, solid-state electrolytes
Procedia PDF Downloads 25199 Defense Priming from Egg to Larvae in Litopenaeus vannamei with Non-Pathogenic and Pathogenic Bacteria Strains
Authors: Angelica Alvarez-Lee, Sergio Martinez-Diaz, Jose Luis Garcia-Corona, Humberto Lanz-Mendoza
Abstract:
World aquaculture is always looking for improvements to achieve high-yield production while avoiding infection by pathogenic agents. The best way to achieve this is to understand the biological model in order to create alternative treatments that could be applied in hatcheries, resulting in greater economic gains and improvements in public health. In the last decade, immunomodulation in shrimp culture with probiotics, organic acids and different carbon sources has gained great interest, mainly in the larval and juvenile stages. Immune priming is associated with a strong protective effect against a later pathogen challenge. This work provides another perspective on immunostimulation from spawning until hatching. The stimulation takes place during embryonic development and generates resistance to infection by pathogenic bacteria. Massive spawnings of white shrimp L. vannamei were obtained and placed in experimental units with 700 mL of sterile seawater at 30 °C, a salinity of 28 ppm and continuous aeration, at a density of 8 embryos.mL⁻¹. The immunostimulating effect of three dead strains of non-pathogenic bacteria (Escherichia coli, Staphylococcus aureus and Bacillus subtilis) and a dead strain of a white shrimp pathogen (Vibrio parahaemolyticus) was evaluated. The heat-killed strains were adjusted to an O.D. of 0.5 at A 600 nm and added directly to the seawater of each unit at a ratio of 1/100 (v/v). A control group of embryos without an inoculum of dead bacteria was kept under the same physicochemical conditions as the rest of the treatments throughout the experiment and used as a reference. The duration of the stimulus was 12 hours; then the larvae that hatched were collected, counted and transferred to a new experimental unit (same physicochemical conditions, at a salinity of 28 ppm) to carry out an infection challenge against the pathogen V. parahaemolyticus, adding directly to the seawater 1/100 (v/v) of the live strain adjusted to an O.D. of 0.5 at A 600 nm. Subsequently, 24 hrs after infection, nauplii survival was evaluated. The results of this work show that, after 24 hrs, the hatching rates of shrimp embryos immunostimulated with the dead strains of B. subtilis and V. parahaemolyticus are significantly higher compared to the rest of the treatments and the control. Furthermore, the survival of L. vannamei after a 24-hr infection challenge against the live strain of V. parahaemolyticus is greater (P < 0.05) in the larvae immunostimulated during embryonic development with the dead strains of B. subtilis and V. parahaemolyticus, followed by those treated with E. coli. In summary, surface antigens can stimulate the developing cells to promote hatching while allowing normal development, in agreement with the optical observations, and a differential response exists between treatments post-infection. This research provides evidence of the immunostimulant effect of dead pathogenic and non-pathogenic bacterial strains on the hatching rate and survival of the shrimp L. vannamei during embryonic and larval development. This research continues by evaluating the effect of these dead strains on the expression of genes related to defense priming in L. vannamei larvae that come from massive spawnings in hatcheries, before and after the infection challenge against V. parahaemolyticus.Keywords: immunostimulation, L. vannamei, hatching, survival
Procedia PDF Downloads 14298 Evaluation of Functional Properties of Protein Hydrolysate from the Fresh Water Mussel Lamellidens marginalis for Nutraceutical Therapy
Authors: Jana Chakrabarti, Madhushrita Das, Ankhi Haldar, Roshni Chatterjee, Tanmoy Dey, Pubali Dhar
Abstract:
High incidences of Protein Energy Malnutrition as a consequence of low protein intake are quite prevalent among children in developing countries. Thus, the prevention of under-nutrition has emerged as a critical challenge for India's development planners in recent times. The increase in population over the last decade has led to greater pressure on existing animal protein sources. These resources are currently declining due to persistent drought, diseases, natural disasters, the high cost of feed, and the low productivity of local breeds, and this decline in productivity is most evident in some developing countries. The need of the hour is therefore to search for efficient utilization of unconventional, low-cost animal protein resources. Molluscs, as a group, are regarded as an under-exploited source of health-benefit molecules. Bivalvia is the second largest class of the phylum Mollusca. Annual harvests of bivalves for human consumption represent about 5% by weight of the total world harvest of aquatic resources. The freshwater mussel Lamellidens marginalis is widely distributed in ponds and large bodies of perennial waters in the Indian sub-continent and is well accepted as food all over India. Moreover, ethno-medicinal uses of the flesh of Lamellidens among rural people to treat hypertension have been documented. The present investigation thus attempts to evaluate the potential of Lamellidens marginalis as a functional food. Mussels were collected from freshwater ponds and brought to the laboratory two days before experimentation for acclimatization to laboratory conditions. Shells were removed, and the flesh was preserved at -20 °C until analysis. Tissue homogenate was prepared for proximate studies. Fatty acid and amino acid compositions were analyzed. Vitamin, mineral and heavy metal contents were also studied. Mussel protein hydrolysate was prepared using Alcalase 2.4 L, and the degree of hydrolysis was evaluated to analyze its functional properties. Ferric reducing antioxidant power (FRAP) and DPPH antioxidant assays were performed. The anti-hypertensive property was evaluated by an angiotensin-converting enzyme (ACE) inhibition assay. Proximate analysis indicates that mussel meat contains moderate amounts of protein (8.30±0.67%), carbohydrate (8.01±0.38%) and reducing sugar (4.75±0.07%), but a low amount of fat (1.02±0.20%). Moisture content is quite high, but ash content is very low. Phospholipid content is significantly high (19.43%). The lipid fraction contains substantial amounts of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which have proven prophylactic value. Trace elements are present in substantial amounts. A comparative study of proximate nutrients between Labeo rohita, Lamellidens and cow's milk indicates that mussel meat can be used as a complementary food source. Functionality analyses of the protein hydrolysate show increases in fat absorption, emulsification, foaming capacity and protein solubility. Progressive anti-oxidant and anti-hypertensive properties have also been documented. Lamellidens marginalis can thus be regarded as a functional food source, as it may combine effectively with other food components to provide essential elements to the body. Moreover, mussel protein hydrolysate provides opportunities for utilizing it in various food formulations and pharmaceuticals. The observations presented herein should be viewed as a prelude to what the future holds.Keywords: functional food, functional properties, Lamellidens marginalis, protein hydrolysate
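The DPPH radical-scavenging and ACE-inhibition results referred to above are conventionally expressed as percentage inhibition relative to a control reaction; the short sketch below applies that standard formula to hypothetical absorbance readings, and the blank-correction scheme shown is an assumption about typical assay practice rather than the authors' exact protocol.

```python
def percent_inhibition(a_control, a_sample, a_sample_blank=0.0):
    """Standard inhibition formula: (1 - (A_sample - A_blank) / A_control) * 100."""
    return (1.0 - (a_sample - a_sample_blank) / a_control) * 100.0

if __name__ == "__main__":
    # Hypothetical absorbance readings (arbitrary units).
    dpph = percent_inhibition(a_control=0.820, a_sample=0.410)
    ace = percent_inhibition(a_control=0.650, a_sample=0.240, a_sample_blank=0.030)
    print(f"DPPH radical scavenging: {dpph:.1f} %")
    print(f"ACE inhibition:          {ace:.1f} %")
```

Reporting hydrolysate activity this way allows direct comparison across assay batches, since each measurement is normalised to its own uninhibited control.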
Procedia PDF Downloads 41897 Effect of Methoxy and Polyene Additional Functionalized Group on the Photocatalytic Properties of Polyene-Diphenylaniline Organic Chromophores for Solar Energy Applications
Authors: Ife Elegbeleye, Nnditshedzeni Eric, Regina Maphanga, Femi Elegbeleye, Femi Agunbiade
Abstract:
The global potential of other renewable energy sources such as wind, hydroelectric, biomass, and geothermal is estimated to be approximately 13%, with hydroelectricity constituting the larger share. Sunlight provides by far the largest of all carbon-neutral energy sources. More energy from sunlight strikes the Earth in one hour (4.3 × 10²⁰ J) than all the energy consumed on the planet in a year (4.1 × 10²⁰ J); hence, solar energy remains the most abundant clean, renewable energy resource for mankind. Photovoltaic (PV) devices such as silicon solar cells and dye-sensitized solar cells are utilized to harness solar energy. Polyene-diphenylaniline organic molecules are an important set of molecules that has stirred much research interest as photosensitizers in TiO₂ semiconductor-based dye-sensitized solar cells (DSSCs). The advantages of organic dye molecules over metal-based complexes are higher extinction coefficients, moderate cost, good environmental compatibility, and favorable electrochemical properties. The polyene-diphenylaniline organic dyes with the basic donor-π-acceptor configuration are affordable, easy to synthesize and possess chemical structures that can easily be modified to optimize their photocatalytic and spectral properties. The enormous interest in polyene-diphenylaniline dyes as photosensitizers is due to their fascinating spectral properties, which include absorption from the visible to the near-infrared region. In this work, a density functional theory approach via the GPAW software, Avogadro and ASE was employed to study the effect of the methoxy functionalized group on the spectral properties of polyene-diphenylaniline dyes and their photon-absorbing characteristics in the visible to near-infrared region of the solar spectrum. Our results showed that the two phenyl-based complexes D5 and D7 exhibit maximum absorption peaks at 750 nm and 850 nm, while D9 and D11 with the methoxy group show maximum absorption peaks at 800 nm and 900 nm, respectively. The highest absorption wavelength is notable for D9 and D11 containing additional polyene and methoxy groups. Also, the D9 and D11 chromophores with the methoxy group show lower energy gaps of 0.98 and 0.85, respectively, than the corresponding D5 and D7 dye complexes with energy gaps of 1.32 and 1.08. The analysis of their electron injection kinetics, ∆Ginject, into TiO₂ shows that D9 and D11 with the methoxy group have higher electron injection kinetics, with values of -2.070 and -2.030, than the corresponding polyene-diphenylaniline complexes without the additional polyene group, which have ∆Ginject values of -2.820 and -2.130, respectively. Our findings suggest that the addition of a functionalized group as an extension of the organic complexes results in higher light-harvesting efficiencies and a bathochromic shift of the absorption spectra to longer wavelengths, which suggests higher current densities and open-circuit voltages in DSSCs. The study suggests that the photocatalytic properties of organic chromophores/complexes with a donor-π-acceptor configuration can be enhanced by the addition of functionalized groups.Keywords: renewable energy resource, solar energy, dye sensitized solar cells, polyene-diphenylaniline organic chromophores
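The electron-injection driving force discussed above is commonly estimated from the excited-state oxidation potential of the dye and the conduction-band edge of TiO₂; one widely used set of working equations from the DSSC literature is sketched below in LaTeX. It is a standard approximation, not necessarily the exact scheme used by the authors.

```latex
% Excited-state oxidation potential of the dye (ground-state value shifted by
% the vertical excitation energy E_{0-0}), the injection driving force relative
% to the TiO2 conduction-band edge E_CB, and the light-harvesting efficiency:
E_{\mathrm{dye}}^{*} \;=\; E_{\mathrm{dye}}^{\mathrm{ox}} \;-\; E_{0\text{-}0},
\qquad
\Delta G^{\mathrm{inject}} \;=\; E_{\mathrm{dye}}^{*} \;-\; E_{\mathrm{CB}}(\mathrm{TiO_2}),
\qquad
\mathrm{LHE}(\lambda) \;=\; 1 - 10^{-f},
```

where f is the oscillator strength at the absorption maximum; a more negative ΔG^inject indicates a thermodynamically more favourable injection, which is the sense in which the ∆Ginject values quoted above are compared.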
Procedia PDF Downloads 11196 The Development, Composition, and Implementation of Vocalises as a Method of Technical Training for the Adult Musical Theatre Singer
Authors: Casey Keenan Joiner, Shayna Tayloe
Abstract:
Classical voice training for the novice singer has long relied on the guidance and instruction of vocalise collections, such as those written and compiled by Marchesi, Lütgen, Vaccai, and Lamperti. These vocalise collections purport to encourage healthy vocal habits and instill technical longevity in both aspiring and established singers, though their scope has long been somewhat confined to the classical idiom. For pedagogues and students specializing in other vocal genres, such as musical theatre and CCM (contemporary commercial music), low-impact and pertinent vocal training aids are in short supply, and much of the suggested literature derives from classical methodology. While the tenets of healthy vocal production remain ubiquitous, specific stylistic needs and technical emphases differ from genre to genre and may require a specified extension of vocal acuity. As musical theatre continues to grow in popularity at both the professional and collegiate levels, the need for specialized training grows as well. Pedagogical literature geared specifically towards musical theatre (MT) singing and vocal production, while relatively uncommon, is readily accessible to the contemporary educator. Practitioners such as Norman Spivey, Mary Saunders Barton, Claudia Friedlander, Wendy Leborgne, and Marci Rosenberg continue to publish relevant research in the field of musical theatre voice pedagogy and have successfully identified many common MT vocal faults, their subsequent diagnoses, and their eventual corrections. Where classical methodology would suggest specific vocalises or training exercises to maintain corrected vocal posture following successful fault diagnosis, musical theatre finds itself without a relevant body of work towards which to transition. By analyzing the existing vocalise literature by means of a specialized set of parameters, including but not limited to melodic variation, rhythmic complexity, vowel utilization, and technical targeting, we have composed a set of vocalises meant specifically to address the training and conditioning of adult musical theatre voices. These vocalises target many pedagogical tenets of the musical theatre genre, including but not limited to thyroarytenoid-dominant production, twang resonance, lateral vowel formation, and "belt-mix." By implementing these vocalises in the musical theatre voice studio, pedagogues can efficiently communicate proper musical theatre vocal posture and kinesthetic connection to their students, regardless of age or level of experience. The composition of these vocalises serves MT pedagogues on both a technical level and a sociological one. MT is a relative newcomer on the collegiate stage, and the academization of musical theatre methodologies has been a slow and arduous process. The conflation of classical and MT techniques and training methods has long plagued the world of voice pedagogy, and teachers often find themselves in positions of "cross-training," that is, teaching students of both genres in one combined voice studio. As MT continues to establish itself on academic platforms worldwide, genre-specific literature and focused studies are both rare and invaluable. To ensure that modern students receive exacting and definitive training in their chosen fields, it becomes increasingly necessary for genres such as musical theatre to boast specified literature, and a collection of musical theatre-specific vocalises only aids in this effort.
This collection of musical theatre vocalises is the first of its kind and provides genre-specific studios with a basis upon which to grow healthy, balanced voices built for the harsh conditions of the modern theatre stage.Keywords: voice pedagogy, targeted methodology, musical theatre, singing
Procedia PDF Downloads 156