Search results for: combined extremes
800 Minimization of the Abrasion Effect of Fiber Reinforced Polymer Matrix on Stainless Steel Injection Nozzle through the Application of Laser Hardening Technique
Authors: Amessalu Atenafu Gelaw, Nele Rath
Abstract:
Currently, laser hardening is becoming one of the most efficient and effective hardening techniques due to its significant advantages. The localized heat source, the absence of a cooling medium, the self-quenching property, the low distortion resulting from localized heat input, its environmentally friendly behavior and the short processing time are among the main benefits of adopting this technology. Today, a variety of injection machines are used in the plastic, textile, electrical and mechanical industries. Due to the fast growth of composite technology, fiber reinforced polymer matrix composites are becoming an attractive option in these industries. However, because of the abrasive nature of fiber reinforced polymer matrix composites on injection components, many parts wear out before the end of their design life. Niko, a company specialized in injection molded products, suffers from the short lifetime of the injection nozzles of its molds due to the use of fiber reinforced and, therefore, more abrasive polymer matrices. To prolong the lifetime of these molds, hardening of susceptible components such as the injection nozzles was necessary. In this paper, the laser hardening process is investigated on Unimax, a type of stainless steel. The investigation to obtain optimal results for the nozzle case was performed in three steps. First, the optimal parameters for the maximum possible hardenability of the nozzle material were determined on a flat sample, using experimental testing as well as thermal simulation. Next, the effect of an inclination on the maximum temperature was analyzed both by experimental testing and validation through simulation. Finally, the data were combined and applied to the nozzle. This paper describes possible strategies and methods for laser hardening of the nozzle to reach a hardness of at least 720 HV for the investigated material. It has been proven that the nozzle can be laser hardened to over 900 HV, with the option of even higher results when more precise positioning of the laser can be assured.
Keywords: absorptivity, fiber reinforced matrix, laser hardening, Nd:YAG laser
Procedia PDF Downloads 156
799 Sources and Potential Ecological Risks of Heavy Metals in the Sediment Samples From Coastal Area in Ondo, Southwest Nigeria
Authors: Ogundele Lasun Tunde, Ayeku Oluwagbemiga Patrick
Abstract:
Heavy metals are released into sediments in the aquatic environment from both natural and anthropogenic sources, and they are considered a worldwide issue due to their deleterious ecological risks and disruption of the food chain. In this study, sediment samples were collected at three major sites (Awoye, Abereke and Ayetoro) along the Ondo coastal area using a Van Veen grab sampler. The concentrations of As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, V and Zn were determined by Atomic Absorption Spectroscopy (AAS). The combined concentration data were subjected to the Positive Matrix Factorization (PMF) receptor approach for source identification and apportionment. The probable risks that might be posed by heavy metals in the sediment were estimated by potential and integrated ecological risk indices. Among the measured heavy metals, Fe had average concentrations of 20.38 ± 2.86, 23.56 ± 4.16 and 25.32 ± 4.83 µg/g at the Abereke, Awoye and Ayetoro sites, respectively. The PMF analysis identified four sources of heavy metals in the sediments. The resolved sources and their percentage contributions were oil exploration (39%), industrial waste/sludge (35%), detrital processes (18%) and Mn-sources (8%). Oil exploration activities and industrial wastes are the major sources contributing heavy metals to the coastal sediments. The major pollutants posing ecological risks to the local aquatic ecosystem are As, Pb, Cr and Cd (40 < Ei ≤ 80), classifying the sites as moderate risk. The integrated risk values of Awoye, Abereke and Ayetoro are 231.2, 234.0 and 236.4, respectively, suggesting that the study areas had a moderate ecological risk. The study showed the suitability of the PMF receptor model for source identification of heavy metals in sediments. Also, intensive anthropogenic activities and natural sources could largely discharge heavy metals into the study area, which may increase the heavy metal content of the sediments and further contribute to the associated ecological risk, thus affecting the local aquatic ecosystem.
Keywords: positive matrix factorization, sediments, heavy metals, sources, ecological risks
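The single-metal risk classes cited above follow the usual Hakanson-style potential ecological risk index. The sketch below is a minimal illustration of that calculation, assuming conventional toxic-response factors and placeholder background concentrations; none of the values are the study's own reference data.

```python
# Minimal sketch of a Hakanson-type potential ecological risk calculation.
# Toxic-response factors are the conventional ones; background values are assumed placeholders.
TOXIC_RESPONSE = {"As": 10, "Cd": 30, "Cr": 2, "Cu": 5, "Pb": 5, "Ni": 5, "Zn": 1}
BACKGROUND = {"As": 1.0, "Cd": 0.1, "Cr": 35.0, "Cu": 20.0, "Pb": 15.0, "Ni": 20.0, "Zn": 50.0}  # µg/g, assumed

def potential_risk(measured):
    """Ei = Tr * (measured concentration / background concentration) for each metal."""
    return {m: TOXIC_RESPONSE[m] * c / BACKGROUND[m]
            for m, c in measured.items() if m in TOXIC_RESPONSE}

def classify_ei(ei):
    """Single-metal risk classes; 40 < Ei <= 80 is 'moderate', as cited in the abstract."""
    for upper, label in [(40, "low"), (80, "moderate"), (160, "considerable"), (320, "high")]:
        if ei <= upper:
            return label
    return "very high"

# Illustrative site concentrations in µg/g (not measured data).
site = {"As": 8.2, "Cd": 0.4, "Cr": 55.0, "Pb": 30.0, "Zn": 60.0}
ei = potential_risk(site)
ri = sum(ei.values())  # integrated (cumulative) ecological risk index for the site
print({m: (round(v, 1), classify_ei(v)) for m, v in ei.items()}, "RI =", round(ri, 1))
```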
Procedia PDF Downloads 21
798 Comparison Of Virtual Non-Contrast To True Non-Contrast Images Using Dual Layer Spectral Computed Tomography
Authors: O’Day Luke
Abstract:
Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient, in both the true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e. gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation between the VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and > 90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phase showed a good correlation in most tissue types. The aortic attenuation, however, was somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.
Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison
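The agreement statistics reported above (mean difference, mean absolute difference, fraction of ROIs within 15 HU, correlation) can be reproduced from paired ROI measurements along the following lines. This is a minimal sketch with made-up attenuation values, not the study's data.

```python
import numpy as np

# Paired attenuation measurements (HU) for the same ROIs; values are illustrative only.
tnc = np.array([45.0, 52.0, 10.0, 33.0, 58.0, -95.0])  # true non-contrast
vnc = np.array([49.0, 50.0, 14.0, 30.0, 63.0, -90.0])  # virtual non-contrast

diff = vnc - tnc
print("mean difference     :", diff.mean(), "+/-", diff.std(ddof=1), "HU")
print("mean absolute diff. :", np.abs(diff).mean(), "HU")
print("within 15 HU        :", 100.0 * np.mean(np.abs(diff) < 15.0), "%")
print("Pearson correlation :", np.corrcoef(tnc, vnc)[0, 1])
```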
Procedia PDF Downloads 141
797 A New Binder Mineral for Cement Stabilized Road Pavements Soils
Authors: Aydın Kavak, Özkan Coruk, Adnan Aydıner
Abstract:
Long-term performance of pavement structures is significantly affected by the stability of the underlying soils. In situ subgrades often do not provide the support required to achieve acceptable performance under traffic loading and environmental demands. NovoCrete® is a powder binder mineral for cement-stabilized road pavement soils. NovoCrete® combined with Portland cement at optimum water content increases the crystalline formations during the hydration process, resulting in higher strengths, neutralized pH levels, and water impermeability. These changes in soil properties may allow existing unsuitable in-situ materials to be transformed into suitable fill materials. The main features of NovoCrete® are that it is applicable to all types of soil, it reduces premature cracking and improves soil properties, and it creates base and subbase course layers with high bearing capacity while reducing hazardous materials. It can also be used for the stabilization of recyclable aggregates, old asphalt pavement aggregate, etc. There are many applications in Germany, Turkey, India, etc. In this paper, a few field applications in Turkey are discussed. In these road construction works, the binder material is used for cement stabilization. In the applications, 120-180 kg of cement is used per 1 m³ of soil, with 2% of the NovoCrete® binder, for the stabilization. The results of a plate loading test at a road construction site show a deformation of only 1 mm under a loading of 7 kg/cm². The modulus of subgrade reaction increased from 611 MN/m³ to 3673 MN/m³. The soaked CBR values for stabilized soils increased from 10-20% to 150-200%. According to these data, weak subgrade soil can be used as a base or subbase after the modification. The potential reduction in the need for quarried materials will help conserve natural resources. The use of on-site or nearby materials in fills will significantly reduce transportation costs and provide both economic and environmental benefits.
Keywords: soil, stabilization, cement, binder, Novocrete, additive
Procedia PDF Downloads 221
796 The Integrated Methodological Development of Reliability, Risk and Condition-Based Maintenance in the Improvement of the Thermal Power Plant Availability
Authors: Henry Pariaman, Iwa Garniwa, Isti Surjandari, Bambang Sugiarto
Abstract:
The availability of a complex system such as a thermal power plant is strongly influenced by the reliability of spare parts and maintenance management policies. The reliability-centered maintenance (RCM) technique is an established method of analysis and is the main reference for maintenance planning. This method considers the consequences of failure in its implementation but does not deal with the further risk of downtime associated with failures, loss of production, or high maintenance costs. The risk-based maintenance (RBM) technique provides support strategies to minimize the risks posed by failure and to derive maintenance tasks considering cost effectiveness. Meanwhile, condition-based maintenance (CBM) focuses on condition monitoring, which allows maintenance or other actions to be planned and scheduled so that the risk of failure is avoided before time-based maintenance would take place. Implementation of RCM, RBM or CBM alone, or of RCM combined with RBM or RCM combined with CBM, are maintenance approaches used in thermal power plants. Implementing these three techniques in an integrated maintenance scheme will increase the availability of thermal power plants compared with using the techniques individually or in combinations of two. This study uses reliability-, risk- and condition-based maintenance in an integrated manner to increase the availability of thermal power plants. The method generates an MPI (Priority Maintenance Index), in which the RPN (Risk Priority Number) is multiplied by the RI (Risk Index), and an FDT (Failure Defense Task), which can generate condition monitoring and assessment tasks in addition to maintenance tasks. Both the MPI and FDT, obtained from the development of a functional tree, failure mode and effects analysis, fault-tree analysis, and risk analysis (risk assessment and risk evaluation), were then used to develop and implement a maintenance, monitoring and condition-assessment plan and schedule, and ultimately to perform an availability analysis. The results of this study indicate that reliability-, risk- and condition-based maintenance methods, applied in an integrated manner, can increase the availability of thermal power plants.
Keywords: integrated maintenance techniques, availability, thermal power plant, MPI, FDT
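As a rough illustration of the priority index described above, the sketch below assumes the conventional FMEA definition of the RPN (severity × occurrence × detection) and treats the risk index RI as an externally supplied weight, since the paper's exact rating scales are not reproduced here; the failure modes and ratings are hypothetical.

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Conventional FMEA Risk Priority Number (each factor typically rated 1-10)."""
    return severity * occurrence * detection

def mpi(severity: int, occurrence: int, detection: int, risk_index: float) -> float:
    """Priority Maintenance Index as described in the abstract: MPI = RPN x RI.
    risk_index is assumed to come from the risk assessment/evaluation step."""
    return rpn(severity, occurrence, detection) * risk_index

# Illustrative ranking of two hypothetical failure modes of a boiler feed pump.
failure_modes = {
    "bearing wear": mpi(severity=7, occurrence=5, detection=4, risk_index=1.8),
    "seal leakage": mpi(severity=5, occurrence=6, detection=3, risk_index=1.2),
}
for name, score in sorted(failure_modes.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: MPI = {score:.1f}")
```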
Procedia PDF Downloads 794
795 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics
Authors: Titus A. Beu
Abstract:
Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out by a high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects which are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation in dependence of the PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to primarily rely on the formation of DNA bundles, rather than on changes of the DNA-strand curvature. The gained insights are expected to be of significant help for designing effective gene-delivery applications.
Keywords: DNA condensation, gene-delivery, polyethylene-imine, molecular dynamics
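The Boltzmann inversion step mentioned above turns a bead-bead distribution sampled from the all-atom run into a coarse-grained potential. Below is a minimal sketch of direct Boltzmann inversion; the bond-length samples are synthetic stand-ins, not data from the paper's simulations.

```python
import numpy as np

kB = 0.0083145  # Boltzmann constant in kJ/(mol·K), common MD units
T = 300.0       # temperature in K

# Synthetic bead-bead bond-length samples (nm) standing in for an all-atom trajectory.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.35, scale=0.02, size=50_000)

# Probability distribution of the bonded distance.
hist, edges = np.histogram(samples, bins=100, density=True)
r = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0

# Direct Boltzmann inversion: V(r) = -kB*T*ln p(r), shifted so the minimum is zero.
# (A Jacobian correction p(r)/r**2 is often applied for radial distances; omitted for brevity.)
V = -kB * T * np.log(hist[mask])
V -= V.min()

# The resulting table (r, V) can then be fitted, e.g. harmonically, to parametrize the CG bond.
for ri, vi in list(zip(r[mask], V))[::20]:
    print(f"r = {ri:.3f} nm   V = {vi:.3f} kJ/mol")
```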
Procedia PDF Downloads 119
794 Agricultural Organized Areas Approach for Resilience to Droughts, Nutrient Cycle and Rural and Wild Fires
Authors: Diogo Pereira, Maria Moura, Joana Campos, João Nunes
Abstract:
As the war in Ukraine highlights the European Economic Area's vulnerability and external dependence regarding feed and food, agriculture gains significant importance. Transformative change is necessary to reach a sustainable and resilient agricultural sector. Agriculture is an important driver of the bioeconomy, of the equilibrium and survival of society, and of resilience to rural fires. The pressure of (1) water stress, (2) the nutrient cycle, and (3) socio-demographic evolution towards 70% of the population living in urban systems and the aging of the rural population, combined with climate change, exacerbates the problem and paradigm of rural and wildfires, especially in Portugal. The Portuguese territory is characterized by (1) 28% of marginal land, (2) soil quality in 70% of the territory that is not appropriate for agricultural activity, (3) micro smallholdings, with less than 1 ha per proprietor and mainly familiar and traditional agriculture in the North and Centre regions, and (4) the most vulnerable areas for rural fires being in these same regions. The most important difference between the South and the North and Centre of Portugal, with respect to rural and wildfires, is agricultural activity, which is at a higher level in the South. In Portugal, rural and wildfires represent an average annual economic loss of around 800 to 1000 million euros. The WinBio model is an agri-environmental metabolism design with the capacity to create a new agri-food metabolism through Agricultural Organized Areas, a public-private partnership. This partnership seeks to grow agricultural activity in regions with (1) abandoned territory, (2) micro smallholdings, (3) water and nutrient management needs, and (4) low agri-food literacy. It also aims to support the planning and monitoring of resource use efficiency and sustainability of territories, using agriculture as a barrier against rural and wildfires in order to protect the rural population.
Keywords: agricultural organized areas, residues, climate change, drought, nutrients, rural and wild fires
Procedia PDF Downloads 78
793 Combined Tarsal Coalition Resection and Arthroereisis in Treatment of Symptomatic Rigid Flat Foot in Pediatric Population
Authors: Michael Zaidman, Naum Simanovsky
Abstract:
Introduction: Symptomatic tarsal coalition with rigid flat foot often demands an operative solution. An isolated coalition resection does not guarantee pain relief; correction of the co-existing foot deformity may be required. The objective of the study was to analyze the results of a combination of tarsal coalition resection and arthroereisis. Patients and methods: We retrospectively reviewed the medical records and radiographs of children operatively treated in our institution for symptomatic calcaneonavicular or talocalcaneal coalition between the years 2019 and 2022. Eight patients (twelve feet), 4 boys and 4 girls with a mean age of 11.2 years, were included in the study. In six patients (10 feet), calcaneonavicular coalition was diagnosed; two patients (two feet) sustained talonavicular coalition. To quantify the degree of foot deformity, we used the calcaneal pitch angle, the lateral talar-first metatarsal (Meary's) angle, and the talonavicular coverage angle. The clinical results were assessed using the American Orthopaedic Foot and Ankle Society (AOFAS) Ankle Hindfoot Score. Results: The mean follow-up was 28 months. The preoperative mean talonavicular coverage angle was 17.75°, compared with a postoperative mean angle of 5.4°. The calcaneal pitch angle improved from a mean of 6.8° to 16.4°. The mean preoperative Meary's angle of -11.3° improved to a mean of 2.8°. The mean AOFAS score improved from 54.7 preoperatively to 93.1 points postoperatively. In nine of twelve feet, the overall clinical outcome judged by the AOFAS scale was excellent (90-100 points); in three feet it was good (80-90 points). Six patients (ten feet) clearly improved their subtalar range of motion. Conclusion: For symptomatic stiff or rigid flat feet associated with tarsal coalition, the combination of coalition resection and arthroereisis leads to normalization of radiographic parameters and to clinical and functional improvement with good patient satisfaction, and is likely to be more effective than the isolated procedures.
Keywords: rigid flat foot, tarsal coalition resection, arthroereisis, outcome
Procedia PDF Downloads 64
792 Investigating the Motion of a Viscous Droplet in Natural Convection Using the Level Set Method
Authors: Isadora Bugarin, Taygoara F. de Oliveira
Abstract:
Binary fluids and emulsions, in general, are present in a vast range of industrial, medical, and scientific applications, showing complex behaviors responsible for defining the flow dynamics and the system operation. However, the literature describing those fluids in non-isothermal models is currently still limited. The present work brings a detailed investigation of droplet migration due to natural convection in a square enclosure, aiming to clarify the effects of drop viscosity on the flow dynamics by showing how distinct viscosity ratios (droplet/ambient fluid) influence the drop motion and the final movement pattern reached in the stationary regime. The analysis was carried out by observing distinct combinations of Rayleigh number, drop initial position, and viscosity ratio. The Navier-Stokes and energy equations were solved considering the Boussinesq approximation in a laminar flow, using the finite difference method combined with the Level Set method for the binary flow solution. Previous results collected by the authors showed that the Rayleigh number and the drop initial position drastically affect the motion pattern of the droplet. For Ra ≥ 10⁴, two very marked behaviors were observed according to the initial position: the drop can travel either a helical path towards the center or a cyclic circular path resulting in a closed cycle in the stationary regime. The variation of the viscosity ratio showed a significant alteration of this pattern, exposing a large influence on the droplet path, capable of modifying the flow's behavior. Analyses of viscosity effects on the flow's unsteady Nusselt number were also performed. Among the relevant contributions proposed in this work is the potential use of the flow initial conditions as a mechanism to control droplet migration inside the enclosure.
Keywords: binary fluids, droplet motion, level set method, natural convection, viscosity
Procedia PDF Downloads 119
791 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
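The core of the framework described above is an average over the top-performing models. The sketch below illustrates that ensembling step with scikit-learn classifiers; the synthetic features and labels are illustrative assumptions, not the study's Los Angeles dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for timing, weather-forecast and past-pollutant features.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6))  # e.g. season, weekday, temperature, wind, PM2.5(t-1), O3(t-1)
y = (X[:, 4] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # "unhealthy" flag
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The three model families named in the abstract; averaging their predicted
# probabilities ("soft" voting) plays the role of the combined model.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("combined-model accuracy:", ensemble.score(X_test, y_test))

# Periodic refits on newly observed data would mimic the framework's self-adjustment over time.
```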
Procedia PDF Downloads 127
790 A Cooperative Signaling Scheme for Global Navigation Satellite Systems
Authors: Keunhong Chae, Seokho Yoon
Abstract:
Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, thus calling for a more efficient signaling scheme among the satellites used in the overall GNSS network. In that the network throughput is improved, spatial diversity can be one of the efficient signaling schemes; however, it requires multiple antennas, which could cause a significant increase in the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, where virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate in an asynchronous way, and thus the overall performance of the GNSS network could degrade severely. To tackle the problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement due to the signal decoding required at the relay nodes. Although the implementation at the relay nodes could be made simpler to some degree by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient if these operations could be implemented at the source node, which has more resources than the relay nodes. So, in this paper, we propose a novel cooperative signaling scheme, where the data signals are combined in a unique way at the source node, thus obviating the need for complex operations such as signal decoding, time-reversal and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while reducing the complexity at the relay nodes significantly. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.
Keywords: global navigation satellite network, cooperative signaling, data combining, nodes
Procedia PDF Downloads 280
789 Reconstruction Post-mastectomy: A Literature Review on Its Indications and Techniques
Authors: Layaly Ayoub, Mariana Ribeiro
Abstract:
Introduction: Breast cancer is currently considered the leading cause of cancer-related deaths among women in Brazil. Mastectomy, essential in this treatment, often necessitates subsequent breast reconstruction to restore physical appearance and aid in the emotional and psychological recovery of patients. The choice between immediate or delayed reconstruction is influenced by factors such as the type and stage of cancer, as well as the patient's overall health. The decision between autologous breast reconstruction or implant-based reconstruction requires a detailed analysis of individual conditions and needs. Objectives: This study analyzes the techniques and indications used in post-mastectomy breast reconstruction. Methodology: Literature review conducted in the PubMed and SciELO databases, focusing on articles that met the inclusion and exclusion criteria and descriptors. Results: After mastectomy, breast reconstruction is commonly performed. It is necessary to determine the type of technique to be used in each case depending on the specific characteristics of each patient. The tissue expander technique is indicated for patients with sufficient skin and tissue post-mastectomy, who do not require additional radiotherapy, and who opt for a less complex surgery with a shorter recovery time. This procedure promotes the gradual expansion of soft tissues where the definitive implant will be placed. Both temporary and permanent expanders offer flexibility, allowing for adjustment in the expander size until the desired volume is reached, enabling the skin and tissues to adapt to the breast implant area. Conversely, autologous reconstruction is indicated for patients who will undergo radiotherapy, have insufficient tissue, and prefer a more natural solution. This technique uses the transverse rectus abdominis muscle (TRAM) flap, the latissimus dorsi muscle flap, the gluteal flap, and local muscle flaps to shape a new breast, potentially combined with a breast implant. Conclusion: In this context, it is essential to conduct a thorough evaluation regarding the technique to be applied, as both have their benefits and challenges.
Keywords: indications, post-mastectomy, breast reconstruction, techniques
Procedia PDF Downloads 29
788 Development of a Two-Step 'Green' Process for (-) Ambrafuran Production
Authors: Lucia Steenkamp, Chris V. D. Westhuyzen, Kgama Mathiba
Abstract:
Ambergris, and more specifically its oxidation product (–)-ambrafuran, is a scarce, valuable, and sought-after perfumery ingredient. The material is used as a fixative agent to stabilise perfumes in formulations by reducing the evaporation rate of volatile substances. Ambergris is a metabolic product of the sperm whale (Physeter macrocephalus L.), resulting from intestinal irritation. Chemically, (–)-ambrafuran is produced from the natural product sclareol in eight synthetic steps, in the process using harsh and often toxic chemicals to do so. An overall yield of no more than 76% can be achieved in some routes, but generally, this is lower. A new 'green' route has been developed in our laboratory in which sclareol, extracted from the clary sage plant, is converted to (–)-ambrafuran in two steps with an overall yield in excess of 80%. The first step uses a microorganism, Hyphozyma roseoniger, to bioconvert sclareol to an intermediate diol using substrate concentrations of up to 50 g/L. The yield varies between 67 and 90% depending on the substrate concentration used. The purity of the diol product is 95%, and the diol is used without further purification in the next step. The intermediate diol is then cyclodehydrated to the final product (–)-ambrafuran using a zeolite, which is not harmful to the environment and is readily recycled. The yield of the product is 96%, and following a single recrystallization, the purity of the product is >99.5%. A preliminary LC-MS study of the bioconversion identified several intermediates produced in the fermentation broth under oxygen-restricted conditions. Initially, a short-lived ketone is produced in equilibrium with a more stable pyranol, a key intermediate in the process. The latter is oxidised under Norrish type I cleavage conditions to yield an acetate, which is hydrolysed either chemically or by lipase action to afford the primary fermentation product, an intermediate diol. All the intermediates identified point to CYP450 action as the likely key enzyme(s) in the process. This invention is an exceptional example of how the power of biocatalysis, combined with a mild, benign chemical step, can be deployed to replace a total chemical synthesis of a specific chiral antipode of a commercially relevant material.
Keywords: ambrafuran, biocatalysis, fragrance, microorganism
Procedia PDF Downloads 226
787 Effects of Branched-Chain Amino Acid Supplementation on Sarcopenic Patients with Liver Cirrhosis
Authors: Deepak Nathiya, Ramesh Roop Rai, Pratima Singh, Preeti Raj, Supriya Suman, Balvir Singh Tomar
Abstract:
Background: Sarcopenia is a catabolic state in liver cirrhosis (LC), accelerated by the breakdown of skeletal muscle to release amino acids, which adversely affects survival, health-related quality of life, and the response to any underlying disease. The primary objective of the study was to investigate the long-term effect of branched-chain amino acid (BCAA) supplementation on parameters associated with improved prognosis in sarcopenic patients with LC, as well as to evaluate its impact on cirrhosis-related events. Methods: We carried out a 24-week, single-center, randomized, open-label, controlled, two-cohort parallel-group intervention trial comparing the efficacy of BCAA against lactoalbumin (L-ALB) in 106 sarcopenic liver cirrhotics. The BCAA (intervention) group was treated with 7.2 g of BCAA per day, whereas the lactoalbumin group was given 6.3 g of L-ALB. The primary outcome was to assess the impact of BCAA on the parameters of sarcopenia: muscle mass, muscle strength, and physical performance. The secondary outcomes were to study combined survival and maintenance of liver function, and changes in laboratory and clinical markers, over the duration of six months. Results: Treatment with BCAA led to significant improvement in the sarcopenia parameters: muscle strength, muscle function, and muscle mass. Total cirrhosis-related complications occurred less frequently in the BCAA group than in the L-ALB group, as reflected in cumulative event-free survival. Prognostic markers also improved significantly during the study. Conclusion: The current clinical trial demonstrated that long-term BCAA supplementation improved sarcopenia and prognostic markers in patients with advanced liver cirrhosis.
Keywords: sarcopenia, liver cirrhosis, BCAA, quality of life
Procedia PDF Downloads 136
786 Development of an Optimised, Automated Multidimensional Model for Supply Chains
Authors: Safaa H. Sindi, Michael Roe
Abstract:
This project divides supply chain (SC) models into seven Eras, according to the evolution of the market's needs over time. The five earliest Eras describe the emergence of supply chains, while the last two Eras are to be created. Research objectives: The aim is to generate the two latest Eras with their respective models, which focus on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM), which incorporates most characteristics of the SC and allocates them into four quarters (Agile, Lean, Leagile, and Basic SC). This will help companies, especially SMEs, plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six, as it accounts for all the supply chain factors (i.e. offshoring, sourcing, risk) in an interactive system with heuristic learning that helps larger companies and industries to select the best SC model for their market. Methodologies: The data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic. The first round of the Delphi study will contain statements (fuzzy rules) about the matrix of Era Six. The second round of the Delphi contains the feedback given from the first round, and so on. Preliminary findings: Both models are applicable. The matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the best strategy among Basic SC, Lean, Agile and Leagile SC that is tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies to identify and re-strategize the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: The problematic issue facing many companies is deciding which SC model or strategy to incorporate, due to the many models and definitions developed over the years. This research simplifies this by putting most definitions in a template and most models in the matrix of Era Six. This research is original, as the division of the SC into Eras, the matrix of Era Six (OMM) with Fuzzy-Delphi, and the heuristic learning in the AMM of Era Seven provide a synergy of tools that were not combined before in the area of SC. Additionally, the OMM of Era Six is unique as it combines most characteristics of the SC, which is an original concept in itself.
Keywords: Leagile, automation, heuristic learning, supply chain models
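For the Fuzzy-Delphi rounds mentioned above, a common formulation aggregates expert ratings as triangular fuzzy numbers and defuzzifies them against an acceptance threshold. The sketch below follows that common formulation only as an illustration; the expert ratings, aggregation rule and threshold are assumptions, not the study's actual survey design.

```python
import numpy as np

# Each expert rates a statement as a triangular fuzzy number (low, mid, high) on a 0-1 scale.
# The ratings below are illustrative only.
ratings = np.array([
    [0.5, 0.7, 0.9],
    [0.3, 0.5, 0.7],
    [0.7, 0.9, 1.0],
    [0.5, 0.7, 0.9],
])

# Aggregate across experts: min of lows, geometric mean of mids, max of highs.
low = ratings[:, 0].min()
mid = float(np.exp(np.log(ratings[:, 1]).mean()))
high = ratings[:, 2].max()

# Defuzzify (simple average of the triangle's vertices) and compare to an assumed threshold.
score = (low + mid + high) / 3.0
THRESHOLD = 0.7  # assumed acceptance threshold
print(f"aggregated TFN = ({low:.2f}, {mid:.2f}, {high:.2f}), score = {score:.2f}, "
      f"{'accept' if score >= THRESHOLD else 'revisit in next round'}")
```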
Procedia PDF Downloads 389
785 Characterising Indigenous Chicken (Gallus gallus domesticus) Ecotypes of Tigray, Ethiopia: A Combined Approach Using Ecological Niche Modelling and Phenotypic Distribution Modelling
Authors: Gebreslassie Gebru, Gurja Belay, Minister Birhanie, Mulalem Zenebe, Tadelle Dessie, Adriana Vallejo-Trujillo, Olivier Hanotte
Abstract:
Livestock must adapt to changing environmental conditions, which can result in either phenotypic plasticity or irreversible phenotypic change. In this study, we combine Ecological Niche Modelling (ENM) and Phenotypic Distribution Modelling (PDM) to provide a comprehensive framework for understanding the ecological and phenotypic characteristics of indigenous chicken (Gallus gallus domesticus) ecotypes. This approach helped us to classify these ecotypes, differentiate their phenotypic traits, and identify associations between environmental variables and adaptive traits. We measured 297 adult indigenous chickens from various agro-ecologies, including 208 females and 89 males. A subset of the 22 measured traits was selected using stepwise selection, resulting in seven traits for each sex. Using ENM, we identified four agro-ecologies potentially harbouring distinct phenotypes of indigenous Tigray chickens. However, PDM classified these chickens into three phenotypical ecotypes. Chickens grouped in ecotype-1 and ecotype-3 exhibited superior adaptive traits compared to those in ecotype-2, with significant variance observed. This high variance suggests a broader range of trait expression within these ecotypes, indicating greater adaptation capacity and potentially more diverse genetic characteristics. Several environmental variables, such as soil clay content, forest cover, and mean temperature of the wettest quarter, were strongly associated with most phenotypic traits. This suggests that these environmental factors play a role in shaping the observed phenotypic variations. By integrating ENM and PDM, this study enhances our understanding of indigenous chickens' ecological and phenotypic diversity. It also provides valuable insights into their conservation and management in response to environmental changes.
Keywords: adaptive traits, agro-ecology, appendage, climate, environment, imagej, morphology, phenotypic variation
Procedia PDF Downloads 32
784 Processing and Characterization of Aluminum Matrix Composite Reinforced with Amorphous Zr₃₇.₅Cu₁₈.₆₇Al₄₃.₉₈ Phase
Authors: P. Abachi, S. Karami, K. Purazrang
Abstract:
Amorphous reinforcements (metallic glasses) can be considered promising options for reinforcing light-weight aluminum and its alloys. By using the proper type of reinforcement, one can overcome drawbacks such as interfacial de-cohesion and undesirable reactions that can occur at the ceramic particle/metallic matrix interface. In this work, a Zr-based amorphous phase was produced via mechanical milling of elemental powders. Based on the Miedema semi-empirical model and diagrams for formation enthalpies and/or Gibbs free energies of the Zr-Cu amorphous phase in comparison with the crystalline phase, the glass formability range was predicted. The composite was produced using the powder mixture of aluminum and metallic glass and spark plasma sintering (SPS) at a temperature slightly above the glass transition temperature (Tg) of the metallic glass particles. The selected temperature and rapid sintering route were suitable for consolidation of the aluminum matrix without crystallization of the amorphous phase. To characterize the amorphous phase formation, X-ray diffraction (XRD) phase analyses were performed on the powder mixture after specified intervals of milling. The microstructure of the composite was studied by optical and scanning electron microscopy (SEM). Uniaxial compression tests were carried out on composite specimens 4 mm long with a cross-section of 2 × 2 mm². The micrographs indicated an appropriate reinforcement distribution in the metallic matrix. The comparison of the compressive stress-strain curves of the consolidated composite and the non-reinforced Al matrix alloy showed that the enhancement of yield strength and mechanical strength is combined with an appreciable plastic strain at fracture. It can be concluded that metallic glasses (amorphous phases) are an alternative reinforcement material for lightweight metal matrix composites, capable of producing high strength and adequate ductility. However, this comes at the expense of a minor density increase.
Keywords: aluminum matrix composite, amorphous phase, mechanical alloying, spark plasma sintering
Procedia PDF Downloads 364
783 A Remote Sensing Approach to Estimate the Paleo-Discharge of the Lost Saraswati River of North-West India
Authors: Zafar Beg, Kumar Gaurav
Abstract:
The lost Saraswati is described as a large perennial river which was 'lost' in the desert towards the end of the Indus-Saraswati civilisation. It has been proposed earlier that the lost Saraswati flowed in the Sutlej-Yamuna interfluve, parallel to the present-day Indus River. It is believed that one of the earliest known ancient civilizations, the 'Indus-Saraswati civilization', prospered along the course of the Saraswati River. The demise of the Indus civilization is considered to be due to the desiccation of the river. Today, in the Sutlej-Yamuna interfluve, we observe an ephemeral river known as the Ghaggar. It is believed that, along with the Ghaggar River, two other Himalayan rivers, the Sutlej and the Yamuna, were tributaries of the lost Saraswati and made a significant contribution to its discharge. The presence of a large number of archaeological sites and the occurrence of thick fluvial sand bodies in the subsurface of the Sutlej-Yamuna interfluve have been used to suggest that the Saraswati River was a large perennial river. Further, the wide course of about 4-7 km recognized from satellite imagery of the Ghaggar-Hakra belt between Suratgarh and Anupgarh strengthens this hypothesis. Here we develop a methodology to estimate the paleo-discharge and paleo-width of the lost Saraswati River. In doing so, we rely on the hypothesis that the ancient Saraswati River used to carry the combined flow, or some part of it, of the Yamuna, Sutlej and Ghaggar catchments. We first established regime relationships between drainage area and channel width, and between catchment area and discharge, for 29 different rivers presently flowing on the Himalayan Foreland, from the Indus in the west to the Brahmaputra in the east. We found that the width and discharge of all the Himalayan rivers scale in a similar way when plotted against their corresponding catchment areas. Using these regime curves, we calculate the width and discharge of the paleochannels originating from the Sutlej, Yamuna and Ghaggar rivers by measuring their corresponding catchment areas from satellite images. Finally, we add the discharge and width obtained from each of the individual catchments to estimate the paleo-width and paleo-discharge, respectively, of the Saraswati River. Our regime curves provide a first-order estimate of the paleo-discharge of the lost Saraswati.
Keywords: Indus civilization, palaeochannel, regime curve, Saraswati River
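A regime curve of this kind is usually a power-law fit in log-log space. The sketch below, with entirely synthetic catchment data, shows how such a relation could be fitted and then applied to measured paleo-catchment areas and summed, as described above; the coefficient, exponent and catchment areas are assumptions, not the paper's fitted values.

```python
import numpy as np

# Synthetic (catchment area [km^2], bankfull discharge [m^3/s]) pairs standing in
# for the 29 Himalayan foreland rivers used to build the regime curve.
area = np.array([2.0e3, 8.0e3, 2.5e4, 6.0e4, 1.2e5, 2.0e5])
discharge = np.array([150.0, 420.0, 900.0, 1800.0, 3100.0, 4600.0])

# Fit Q = c * A**d by linear regression in log-log space.
d, log_c = np.polyfit(np.log10(area), np.log10(discharge), 1)
c = 10 ** log_c
print(f"regime curve: Q ≈ {c:.3g} * A^{d:.2f}")

# Apply the curve to hypothetical contributing catchment areas (km^2) measured
# from satellite imagery, then sum the contributions, as the abstract describes.
paleo_catchments = {"Sutlej": 5.0e4, "Yamuna": 3.5e4, "Ghaggar": 1.0e4}
contributions = {name: c * a ** d for name, a in paleo_catchments.items()}
print("total paleo-discharge estimate [m^3/s]:", round(sum(contributions.values())))
```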
Procedia PDF Downloads 179
782 Comparison of Regional and Local Indwelling Catheter Techniques to Prolong Analgesia in Total Knee Arthroplasty Procedures: Continuous Peripheral Nerve Block and Continuous Periarticular Infiltration
Authors: Jared Cheves, Amanda DeChent, Joyce Pan
Abstract:
Total knee replacements (TKAs) are one of the most common but painful surgical procedures performed in the United States. Currently, the gold standard for postoperative pain management is the utilization of opioids. However, in the wake of the opioid epidemic, the healthcare system is attempting to reduce opioid consumption by trialing innovative opioid sparing analgesic techniques such as continuous peripheral nerve blocks (CPNB) and continuous periarticular infiltration (CPAI). The alleviation of pain, particularly during the first 72 hours postoperatively, is of utmost importance due to its association with delayed recovery, impaired rehabilitation, immunosuppression, the development of chronic pain, the development of rebound pain, and decreased patient satisfaction. While both CPNB and CPAI are being used today, there is limited evidence comparing the two to the current standard of care or to each other. An extensive literature review was performed to explore the safety profiles and effectiveness of CPNB and CPAI in reducing reported pain scores and decreasing opioid consumption. The literature revealed the usage of CPNB contributed to lower pain scores and decreased opioid use when compared to opioid-only control groups. Additionally, CPAI did not improve pain scores or decrease opioid consumption when combined with a multimodal analgesic (MMA) regimen. When comparing CPNB and CPAI to each other, neither unanimously lowered pain scores to a greater degree, but the literature indicates that CPNB decreased opioid consumption more than CPAI. More research is needed to further cement the efficacy of CPNB and CPAI as standard components of MMA in TKA procedures. In addition, future research can also focus on novel catheter-free applications to reduce the complications of continuous catheter analgesics.
Keywords: total knee arthroplasty, continuous peripheral nerve blocks, continuous periarticular infiltration, opioid, multimodal analgesia
Procedia PDF Downloads 96
781 Approaching the Spatial Multi-Objective Land Use Planning Problems at Mountain Areas by a Hybrid Meta-Heuristic Optimization Technique
Authors: Konstantinos Tolidis
Abstract:
The mountains are amongst the most fragile environments in the world. The world's mountain areas cover 24% of the Earth's land surface and are home to 12% of the global population. A further 14% of the global population is estimated to live in the vicinity of their surrounding areas. As urbanization continues to increase in the world, the mountains are also key centers for recreation and tourism; their attraction is often heightened by their remarkably high levels of biodiversity. Because the features of mountain areas vary spatially (degree of development, human geography, socio-economic reality, relations of dependency and interaction with other areas and regions), spatial planning in these areas is a crucial process for preserving the natural, cultural and human environment and is one of the major processes of an integrated spatial policy. This research focuses on the spatial decision problem of land use allocation optimization, which is a common planning problem in mountain areas. Such decisions must be made not only on what to do and how much to do, but also on where to do it, which adds a whole extra class of decision variables to the problem when spatial optimization is considered. The utility of optimization as a normative tool for spatial problems is widely recognized. However, it is very difficult for planners to quantify the weights of the objectives, especially when these are related to mountain areas. Furthermore, land use allocation optimization problems in mountain areas must be addressed by taking into account not only the general development objectives but also the spatial objectives (e.g. compactness, compatibility and accessibility). Therefore, the main objective of the research was to approach the land use allocation problem by utilizing a hybrid meta-heuristic optimization technique tailored to the spatial characteristics of mountain areas. The results indicate that the proposed methodological approach is very promising and useful both for generating land use alternatives for further consideration in land use allocation decision-making and for supporting spatial management plans in mountain areas.
Keywords: multiobjective land use allocation, mountain areas, spatial planning, spatial decision making, meta-heuristic methods
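The abstract does not spell out the specific meta-heuristic, so the sketch below uses simulated annealing as a stand-in to show how a spatial objective (per-cell suitability plus compactness) over a grid of land-use cells could be optimized; the grid size, suitability scores, weights and cooling schedule are all assumptions.

```python
import math
import random

random.seed(0)
N, USES = 10, 3  # 10x10 raster of planning cells, 3 candidate land uses (assumed)

# Hypothetical suitability score of each land use in each cell (higher is better).
suit = [[[random.random() for _ in range(USES)] for _ in range(N)] for _ in range(N)]

def objective(grid, w_compact=0.5):
    """Weighted sum of per-cell suitability and compactness (same-use neighbour pairs)."""
    s = sum(suit[i][j][grid[i][j]] for i in range(N) for j in range(N))
    c = sum(grid[i][j] == grid[i][j + 1] for i in range(N) for j in range(N - 1))
    c += sum(grid[i][j] == grid[i + 1][j] for i in range(N - 1) for j in range(N))
    return s + w_compact * c

grid = [[random.randrange(USES) for _ in range(N)] for _ in range(N)]
current = objective(grid)
T = 1.0
for step in range(20_000):
    i, j = random.randrange(N), random.randrange(N)
    old = grid[i][j]
    grid[i][j] = random.randrange(USES)     # propose a new land use for one cell
    proposed = objective(grid)
    if proposed >= current or random.random() < math.exp((proposed - current) / T):
        current = proposed                  # accept improving (or occasionally worsening) moves
    else:
        grid[i][j] = old                    # otherwise revert the proposal
    T *= 0.9997                             # geometric cooling schedule
print("final objective value:", round(current, 2))
```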
Procedia PDF Downloads 347
780 Multi-Stage Optimization of Local Environmental Quality by Comprehensive Computer Simulated Person as Sensor for Air Conditioning Control
Authors: Sung-Jun Yoo, Kazuhide Ito
Abstract:
In this study, a comprehensive computer simulated person (CSP), which integrates a computational human model (virtual manikin) and a respiratory tract model (virtual airway), was applied for the estimation of indoor environmental quality. Moreover, an inclusive prediction method was established by integrating computational fluid dynamics (CFD) analysis with the advanced CSP, which is combined with a physiologically-based pharmacokinetic (PBPK) model and an unsteady thermoregulation model for high-accuracy analysis targeting the micro-climate around the human body and the respiratory area. This comprehensive method can estimate not only the contaminant inhalation but also the constant interaction in contaminant transfer between the indoor space, i.e., the target area for indoor air quality (IAQ) assessment, and the respiratory zone, used for health risk assessment. This study focused on the use of the CSP as an air/thermal quality sensor indoors, i.e., the application of the comprehensive model for the assessment of IAQ and thermal environmental quality. A demonstrative analysis was performed in order to examine the applicability of the comprehensive model to a heating, ventilation and air conditioning (HVAC) control scheme. The CSP was located at the center of a simple model room with dimensions of 3 m × 3 m × 3 m. Formaldehyde, generated from the floor material, was assumed to be the target contaminant, and flow field, sensible/latent heat, and contaminant transfer analyses in the indoor space were conducted using CFD simulation coupled with the CSP. In this analysis, thermal comfort was evaluated by thermoregulatory analysis, and respiratory exposure risks, represented by the adsorption flux/concentration at the airway wall surface, were estimated by a PBPK-CFD hybrid analysis. These analysis results concerning IAQ and thermal comfort will be fed back to the HVAC control and could be used to find a suitable ventilation rate and energy requirement for the air conditioning system.
Keywords: CFD simulation, computer simulated person, HVAC control, indoor environmental quality
Procedia PDF Downloads 361
779 Study of the Relationship between the Civil Engineering Parameters and the Floating of Buoy Model Which Made from Expanded Polystyrene-Mortar
Authors: Panarat Saengpanya
Abstract:
There were five objectives in this study: the study of housing types in a water environment, the physical and mechanical properties of the buoy material, the mechanical properties of the buoy models, the floating of the buoy models, and the relationship between the civil engineering parameters and the floating of the buoy. The buoy specimens were made from expanded polystyrene (EPS) covered by 5 mm of mortar, with equal thickness on each side. Specimens were 0.05 m cubes tested at a displacement rate of 0.005 m/min. The existing test method used to assess the relationships between parameters is ASTM C 109, which provides comparative results. The results showed that the three types of housing in a water environment were stilt houses, boat houses, and floating houses. EPS is a lightweight material that has been used in engineering applications since at least the 1950s. Its density is about a hundredth of that of mortar, while the strength of mortar was found to be 72 times that of EPS. One of the advantages of a composite is that two or more materials can be combined to take advantage of the good characteristics of each material. The strength of the buoy is influenced by the mortar, while the floating is influenced by the EPS. The results showed that the buoy specimens compressed under loading. The stress-strain curve showed a high secant modulus before reaching the peak value. Failure occurred within 10% strain, after which the strength decreased as the strain continued. It was observed that the failure strength decreased with increasing total specimen volume. For buoy specimens with the same area, an increase in failure strength was found when the height was increased. The results showed the relationship between five parameters: the floating level, the bearing capacity, the volume, the height, and the unit weight. The study found that increases in buoy height lead to corresponding decreases in both modulus and compressive strength. The total volume and the unit weight were related to the bearing capacity of the buoy.
Keywords: floating house, buoy, floating structure, EPS
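The link between unit weight and floating level noted above is Archimedes' principle: the submerged (draft) fraction of a freely floating body equals its bulk density divided by the water density. A minimal sketch for an EPS cube with a mortar cover is given below; the buoy size and material densities are assumed, not the study's measured values.

```python
# Assumed densities (kg/m^3); EPS is roughly a hundredth of the mortar density,
# consistent with the abstract, but the exact figures here are illustrative.
RHO_EPS, RHO_MORTAR, RHO_WATER = 20.0, 2100.0, 1000.0

def cube_bulk_density(side, shell):
    """Bulk density of an EPS cube of edge `side` covered by a mortar shell of thickness `shell` (m)."""
    core = (side - 2 * shell) ** 3          # EPS core volume
    total = side ** 3                       # overall volume
    mass = RHO_EPS * core + RHO_MORTAR * (total - core)
    return mass / total

side, shell = 0.50, 0.005  # a hypothetical 0.5 m buoy cube with the 5 mm mortar cover
rho_bulk = cube_bulk_density(side, shell)
draft_fraction = rho_bulk / RHO_WATER       # Archimedes: submerged volume fraction when floating freely
print(f"bulk unit weight ≈ {rho_bulk:.0f} kg/m^3")
print("floats" if draft_fraction < 1 else "sinks",
      f"with ≈ {100 * min(draft_fraction, 1.0):.0f}% of its height submerged")
```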
Procedia PDF Downloads 146778 Pavement Management for a Metropolitan Area: A Case Study of Montreal
Authors: Luis Amador Jimenez, Md. Shohel Amin
Abstract:
Pavement performance models are based on projections of observed traffic loads, which makes long-run funding strategies uncertain if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic evolution could change traffic flows. This study addresses both issues through a case study for the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate the annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A backpropagation neural network (BPN) method with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming for lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget required to keep the arterial and local roads of Montreal in good condition. Montreal drivers prefer the use of public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, while ESALs are expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state appears to be reached.Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization
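To make the ESAL projection step concrete, the sketch below accumulates ESALs from a growing AADT, a truck share and a truck factor, with growth set so that traffic doubles over 50 years. The initial AADT, truck share and truck factor are illustrative assumptions, not the Montreal inputs used in the study.

```python
# Minimal sketch of projecting accumulated ESALs from AADT growth, a truck
# percentage and a truck factor. All inputs are illustrative assumptions,
# not the Montreal data used in the study.

AADT_0 = 20_000          # vehicles/day, assumed initial two-way AADT
GROWTH = 2 ** (1 / 50)   # annual factor so traffic doubles in 50 years
TRUCK_SHARE = 0.08       # assumed fraction of trucks
TRUCK_FACTOR = 1.2       # assumed ESALs per truck pass

def accumulated_esals(years):
    """Sum daily truck ESALs over the analysis period (simple compounding)."""
    total = 0.0
    for year in range(years):
        aadt = AADT_0 * GROWTH ** year
        total += aadt * TRUCK_SHARE * TRUCK_FACTOR * 365
    return total

for horizon in (15, 30, 50):
    print(f"{horizon:>2} years -> {accumulated_esals(horizon):,.0f} ESALs")
```

In the study these accumulated loads feed the BPN deterioration models, whose outputs in turn constrain the linear program that selects M&R treatments under the annual budget.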
Procedia PDF Downloads 460777 Efficiency Validation of Hybrid Geothermal and Radiant Cooling System Implementation in Hot and Humid Climate Houses of Saudi Arabia
Authors: Jamil Hijazi, Stirling Howieson
Abstract:
Over one-quarter of the Kingdom of Saudi Arabia’s total oil production (2.8 million barrels a day) is used for electricity generation. The built environment is estimated to consume 77% of the total energy production; of this amount, air conditioning systems consume about 80%. Apart from considerations surrounding global warming and CO2 production, it has to be recognised that oil is a finite resource, and the KSA, like many other oil-rich countries, will have to start to consider a horizon where hydrocarbons are not the dominant energy resource. The employment of hybrid ground cooling pipes in combination with black-body solar collection and radiant night cooling systems may have the potential to displace a significant proportion of the oil currently used to run conventional air conditioning plant. This paper presents an investigation into the viability of such hybrid systems, with the specific aim of reducing carbon emissions while providing all-year-round thermal comfort in a typical Saudi Arabian urban housing block. At the outset, air and soil temperatures were measured in the city of Jeddah. A parametric study was then carried out with computational simulation software (DesignBuilder) that utilised the field measurements and predicted the cooling energy consumption of both a base case and an ideal scenario (a typical block retro-fitted with insulation, solar shading, ground pipes integrated with hypocaust floor slabs/stack ventilation, and radiant cooling pipes embedded in the floor). Initial simulation results suggest that careful ‘ecological design’ combined with hybrid radiant and ground pipe cooling techniques can displace air conditioning systems, producing significant cost and carbon savings (both capital and running) without appreciable loss of amenity.Keywords: energy efficiency, ground pipe, hybrid cooling, radiative cooling, thermal comfort
Procedia PDF Downloads 262776 Gamma Irradiated Sodium Alginate and Phosphorus Fertilizer Enhances Seed Trigonelline Content, Biochemical Parameters and Yield Attributes of Fenugreek (Trigonella foenum-graecum L.)
Authors: Tariq Ahmad Dar, Moinuddin, M. Masroor A. Khan
Abstract:
There is a considerable need to enhance the content and yield of the active constituents of medicinal plants in view of their massive worldwide demand. Different strategies have been employed to enhance the active constituents of medicinal plants, and the use of phytohormones has proved effective in this regard. Gamma-irradiated sodium alginate (ISA) is known to elicit an array of plant defense responses and biological activities in plants. Considering its medicinal importance, a pot experiment was conducted to explore the effect of ISA and phosphorus on the growth, yield and quality of fenugreek (Trigonella foenum-graecum L.). ISA spray treatments (0, 40, 80 and 120 mg L-1) were applied alone and in combination with 40 kg P ha-1 (P40). Crop performance was assessed in terms of plant growth characteristics, physiological attributes, seed yield and seed trigonelline content. Of the ten treatments, P40 + 80 mg L−1 of ISA proved the best. The results showed that foliar spray of ISA, alone or in combination with P40, augmented the vegetative growth, enzymatic activities, trigonelline content, trigonelline yield and economic yield of fenugreek. Application of 80 mg L−1 of ISA with P40 gave the best results for almost all the parameters studied compared with the control or with 80 mg L−1 of ISA applied alone. This treatment increased the total content of chlorophyll, carotenoids, leaf-N, -P and -K, and trigonelline compared with the control by 24.85 and 27.40%, 15 and 23.52%, 18.70 and 16.84%, 15.88 and 18.92%, and 12 and 14.44% at 60 and 90 DAS, respectively. The combined application of 80 mg L−1 of ISA along with P40 resulted in the maximum increase in seed yield, trigonelline content and trigonelline yield, by 146, 34 and 232.41%, respectively, over the control. Gel permeation chromatography revealed the formation of low molecular weight fractions in the ISA samples, containing oligomers of even less than 20,000 molecular weight, which might be responsible for the plant growth promotion observed in this study. The trigonelline content was determined by reverse-phase high performance liquid chromatography (HPLC) with a C-18 column.Keywords: gamma-irradiated sodium alginate, phosphorus, gel permeation chromatography, HPLC, trigonelline content, yield
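For readers unfamiliar with how the reported percentages are obtained, the tiny sketch below computes a percent increase over the control at the two sampling dates. The treated and control means are placeholders, not the measured fenugreek data.

```python
# Minimal sketch of how a "% increase over control" figure is computed.
# The treated/control means below are hypothetical placeholders.

def pct_increase(treated, control):
    """Percent increase of a treated mean over the control mean."""
    return 100.0 * (treated - control) / control

# hypothetical means for a single parameter at 60 and 90 DAS
control_60, treated_60 = 1.61, 2.01
control_90, treated_90 = 1.46, 1.86

print(f"60 DAS: +{pct_increase(treated_60, control_60):.2f}%")
print(f"90 DAS: +{pct_increase(treated_90, control_90):.2f}%")
```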
Procedia PDF Downloads 321775 Stress Evaluation at Lower Extremity during Walking with Unstable Shoe
Authors: Sangbaek Park, Seungju Lee, Soo-Won Chae
Abstract:
Unstable shoes are known to strengthen lower extremity muscles, improve gait ability and change the user’s gait pattern. The change in gait pattern affects the human body considerably, because walking is a repetitive and steady form of locomotion in daily life. It is possible to estimate joint motion, including joint moments, forces and inertia effects, using kinematic and kinetic analysis. However, it has not been possible to estimate the change of internal stress at the articular cartilage. The purpose of this research is to evaluate the internal stress of the human body during gait with unstable shoes. In this study, FE analysis was combined with a motion capture experiment to obtain the boundary and loading conditions during walking. Motion capture experiments were performed with a participant walking with normal shoes and with unstable shoes. Inverse kinematics and inverse kinetics analyses were performed with OpenSim, from which the joint angles and muscle forces were estimated. A detailed finite element (FE) lower extremity model was constructed. A joint coordinate system was added to the FE model and made coincident with the OpenSim model’s coordinate system. Finally, the joint angles at each phase of gait were used to transform the FE model’s posture to the actual posture from motion capture. The FE model was transformed into the postures of the three major phases (first peak of the ground reaction force, mid-stance, and second peak of the ground reaction force). The direction and magnitude of each muscle force were estimated by OpenSim and applied to the FE model at the attachment point of each muscle. FE analysis was then performed to compare the stress at the knee cartilage during gait with normal shoes and unstable shoes.Keywords: finite element analysis, gait analysis, human model, motion capture
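The two data hand-offs in that workflow, posing FE nodes with a joint angle from inverse kinematics and applying a muscle force along its line of action, can be illustrated with plain geometry, as in the sketch below. It is not the authors' pipeline: the node coordinates, joint centre, flexion axis, attachment points and force magnitude are all made-up numbers, and no OpenSim or FE-solver API is used.

```python
import numpy as np

# Minimal sketch of (1) posing FE nodes with an IK joint angle and
# (2) turning a muscle force magnitude into a load vector along its line
# of action. All geometry and the 850 N magnitude are assumed values.

def rotate_about_joint(nodes, joint_centre, axis, angle_deg):
    """Rotate FE node coordinates (N x 3) about a joint axis (Rodrigues formula)."""
    k = np.asarray(axis, float) / np.linalg.norm(axis)
    theta = np.radians(angle_deg)
    p = nodes - joint_centre
    rotated = (p * np.cos(theta)
               + np.cross(k, p) * np.sin(theta)
               + k * (p @ k)[:, None] * (1 - np.cos(theta)))
    return rotated + joint_centre

def muscle_load(origin, insertion, magnitude):
    """Force vector applied at the insertion, directed toward the origin."""
    direction = np.asarray(origin, float) - np.asarray(insertion, float)
    return magnitude * direction / np.linalg.norm(direction)

tibia_nodes = np.array([[0.02, 0.00, -0.10], [0.01, 0.03, -0.35]])
knee_centre = np.array([0.00, 0.00, 0.00])
flexion_axis = [1.0, 0.0, 0.0]

posed = rotate_about_joint(tibia_nodes, knee_centre, flexion_axis, angle_deg=15.0)
force = muscle_load(origin=[0.00, -0.04, 0.05], insertion=[0.00, -0.03, -0.08],
                    magnitude=850.0)   # N, assumed muscle force from the IK/ID step

print("posed nodes:\n", np.round(posed, 4))
print("muscle force vector (N):", np.round(force, 1))
```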
Procedia PDF Downloads 323774 Disaster Response Training Simulator Based on Augmented Reality, Virtual Reality, and MPEG-DASH
Authors: Sunho Seo, Younghwan Shin, Jong-Hong Park, Sooeun Song, Junsung Kim, Jusik Yun, Yongkyun Kim, Jong-Moon Chung
Abstract:
In order to cope effectively with large and complex disasters, disaster response training is needed. Recently, disaster response training led by the ROK (Republic of Korea) government has been implemented through a four-year R&D project, which has several functions similar to the HSEEP (Homeland Security Exercise and Evaluation Program) of the United States, but also several different features. Due to the unpredictability and diversity of disasters, existing training methods have many limitations in providing experience in the efficient use of disaster incident response and recovery resources. The challenge is always to be as efficient and effective as possible with the limited human and material/physical resources available, given the time and environmental circumstances. To enable repeated training under diverse scenarios, a combined AR (Augmented Reality) and VR (Virtual Reality) simulator is under development. Unlike existing disaster response training, simulator-based training (which allows simultaneous multi-user training via remote login) removes the limitations of time and space constraints and allows repeated training with different combinations of functions and disaster situations. Related systems exist, such as the ADMS (Advanced Disaster Management Simulator) developed by ETC Simulation and HLS2 (Homeland Security Simulation System) developed by Elbit Systems. However, the ROK government needs a simulator custom-made for the country's environment and disaster types, one that also combines the latest information and communication technologies, including AR, VR, and MPEG-DASH (Moving Picture Experts Group - Dynamic Adaptive Streaming over HTTP) technology. In this paper, a new disaster response training simulator is proposed to overcome the limitations of existing training systems and adapted to actual disaster situations in the ROK, and its main technical features are described.Keywords: augmented reality, emergency response training simulator, MPEG-DASH, virtual reality
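Because MPEG-DASH is the streaming layer named above, a generic throughput-based rate-adaptation loop illustrates how it serves bandwidth-hungry AR/VR content to remote trainees. The bitrate ladder, safety margin and throughput trace below are illustrative assumptions, not the simulator's actual configuration or code.

```python
# Minimal sketch of the throughput-based rate adaptation at the heart of
# MPEG-DASH clients, as it might stream AR/VR training content to remote
# multi-user sessions. Ladder, margin and throughput trace are assumptions.

BITRATE_LADDER_KBPS = [1_500, 4_000, 8_000, 16_000, 35_000]  # e.g. up to a 4K VR tier
SAFETY_MARGIN = 0.8   # only use 80% of the estimated throughput

def pick_representation(throughput_kbps):
    """Highest representation whose bitrate fits under the safe throughput budget."""
    budget = throughput_kbps * SAFETY_MARGIN
    eligible = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return eligible[-1] if eligible else BITRATE_LADDER_KBPS[0]

# exponentially smoothed throughput estimate over successive segment downloads
estimate = 0.0
for measured in [12_000, 20_000, 45_000, 9_000, 30_000]:   # kbps per segment
    estimate = measured if estimate == 0 else 0.7 * estimate + 0.3 * measured
    print(f"throughput ~{estimate:,.0f} kbps -> request {pick_representation(estimate):,} kbps")
```

The same adaptation principle lets geographically distributed trainees with very different network conditions join the same exercise without stalling the shared scenario.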
Procedia PDF Downloads 301773 Predicting the Turbulence Intensity, Excess Energy Available and Potential Power Generated by Building Mounted Wind Turbines over Four Major UK City
Authors: Emejeamara Francis
Abstract:
The future of potential wind energy applications within suburban/urban areas currently faces various problems. These include insufficient assessment of the urban wind resource, the limited effectiveness of commercial gust control solutions, and the unavailability of effective and affordable tools for scoping the potential of urban wind applications within built-up environments. In order to assess the potential of urban wind installations effectively, an estimation of the total energy that would be available to them, were effective control systems to be used, and an evaluation of the potential power generated by the wind system are required. This paper presents a methodology for predicting the power generated by a wind system operating within an urban wind resource. The method was developed using high temporal resolution wind measurements from eight potential sites within the urban and suburban environment as inputs to a vertical axis wind turbine multiple stream tube model. A relationship between the unsteady performance coefficient obtained from the stream tube model results and turbulence intensity was demonstrated. Hence, an analytical methodology for estimating the unsteady power coefficient at a potential turbine site is proposed. This is combined with analytical models developed to predict the wind speed and the excess energy content (EEC) available, in order to estimate the potential power generated by wind systems at different heights within a built environment. Estimates of turbulence intensity, wind speed, EEC and turbine performance based on the current methodology allow a more complete assessment of the available wind resource and of potential urban wind projects. This methodology is applied to four major UK cities, namely Leeds, Manchester, London and Edinburgh, and the potential to map turbine performance at different heights within a typical urban city is demonstrated.Keywords: small-scale wind, turbine power, urban wind energy, turbulence intensity, excess energy content
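The quantities the methodology relies on can be sketched from a high-temporal-resolution wind record: turbulence intensity, an excess-energy term for the gust contribution, and a resulting potential-power estimate. The synthetic wind trace, the linear de-rating of the performance coefficient with turbulence intensity, and the EEC expression used here are illustrative assumptions, not the paper's calibrated models.

```python
import numpy as np

# Minimal sketch of turbulence intensity (TI), an excess energy content (EEC)
# term and a potential-power estimate from a high-resolution wind record.
# The synthetic trace, Cp(TI) de-rating and EEC definition are assumptions.

rng = np.random.default_rng(0)
u = np.clip(5.0 + 1.5 * rng.standard_normal(3600), 0.5, None)  # 1 Hz, 1 hour

ti = u.std() / u.mean()                                  # turbulence intensity
eec = (np.mean(u ** 3) - u.mean() ** 3) / u.mean() ** 3  # extra energy carried by gusts

RHO = 1.225          # kg/m3, air density
AREA = 4.0           # m2, assumed small VAWT swept area
CP_STEADY = 0.30     # assumed steady performance coefficient
cp_unsteady = CP_STEADY * max(0.0, 1.0 - 0.5 * ti)       # assumed TI de-rating

power = 0.5 * RHO * AREA * cp_unsteady * u.mean() ** 3 * (1.0 + eec)
print(f"TI = {ti:.2f}, EEC = {eec:.2f}, potential power ~ {power:.0f} W")
```

Repeating such an estimate with wind speed and turbulence profiles at several heights is, in spirit, what allows turbine performance to be mapped across a city.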
Procedia PDF Downloads 277772 Walking the Tightrope: Balancing Project Governance, Complexity, and Servant Leadership for Megaproject Success
Authors: Muhammad Shoaib Iqbal, Shih Ping Ho
Abstract:
Megaprojects are large-scale, complex ventures with significant financial investments, numerous stakeholders, and extended timelines, requiring meticulous management for successful completion. This study explores the interplay between project governance, project complexity, and servant leadership and their combined effects on project success, specifically within the context of Pakistani megaprojects. The primary objectives are to examine the direct impact of project governance on project success, understand the negative influence of project complexity, assess the positive role of servant leadership, explore the moderating effect of servant leadership on the relationship between governance and success, and investigate how servant leadership mitigates the adverse effects of complexity. Using a quantitative approach, survey data were collected from project managers and team members involved in Pakistani megaprojects. Based on a comprehensive empirical model, 257 valid responses were analyzed, and the hypothesized relationships and interaction effects were tested by multiple regression analysis using PLS-SEM. Findings reveal that project governance significantly enhances project success, emphasizing the need for robust governance structures. Conversely, project complexity negatively impacts success, highlighting the challenges of managing complex projects. Servant leadership significantly boosts project success by prioritizing team support and empowerment. Although the interaction between governance and servant leadership is not significant, implying no significant change in project success, servant leadership significantly mitigates the negative effects of project complexity, enhancing team resilience and adaptability. These results underscore the necessity for a balanced approach that integrates strong governance with flexible, supportive leadership. The study offers valuable insights for practitioners, recommending adaptive governance frameworks and the promotion of servant leadership to improve the management and success rates of megaprojects. This research contributes to the broader understanding of effective project management practices in complex environments.Keywords: project governance, project complexity, servant leadership, project success, megaprojects, Pakistan
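The moderation logic being tested (servant leadership softening the negative effect of complexity) amounts to an interaction term in a regression, which the sketch below illustrates on simulated data. The study itself used PLS-SEM on the 257 survey responses; this ordinary least squares toy with made-up coefficients is only meant to show the structure of the interaction test.

```python
import numpy as np

# Minimal sketch of a moderation test: an interaction term
# (complexity x servant leadership) in a regression on project success.
# Data are simulated; coefficients below are assumed, not the study's results.

rng = np.random.default_rng(42)
n = 257
governance = rng.normal(size=n)
complexity = rng.normal(size=n)
servant = rng.normal(size=n)

# assumed data-generating pattern: complexity hurts less when servant leadership is high
success = (0.45 * governance - 0.35 * complexity + 0.30 * servant
           + 0.20 * complexity * servant + rng.normal(scale=0.5, size=n))

X = np.column_stack([np.ones(n), governance, complexity, servant,
                     complexity * servant])
beta, *_ = np.linalg.lstsq(X, success, rcond=None)

names = ["intercept", "governance", "complexity", "servant", "complexity x servant"]
for name, b in zip(names, beta):
    print(f"{name:>22}: {b:+.2f}")
```

A negative complexity coefficient combined with a positive interaction coefficient means the slope of complexity becomes less negative as servant leadership rises, which is the mitigation pattern the study reports.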
Procedia PDF Downloads 34771 Assessing the Impact of Heatwaves on Intertidal Mudflat Colonized by an Exotic Mussel
Authors: Marie Fouet, Olivier Maire, Cécile Masse, Hugues Blanchet, Salomé Coignard, Nicolas Lavesque, Guillaume Bernard
Abstract:
Exacerbated by global change, extreme climatic events such as atmospheric and marine heatwaves may interact with the spread of non-indigenous species and their associated impacts on marine ecosystems. Since the 1970s, introductions of non-indigenous species through oyster exchanges have been numerous. Among them, the Asian date mussel Arcuatula senhousia has colonized a large number of ecosystems worldwide (e.g., California, New Zealand, Italy). In these places, A. senhousia has led to important habitat modifications in the benthic compartment through the physical, biological, and biogeochemical effects associated with the development of dense mussel populations. In Arcachon Bay (France), a coastal lagoon on the French Atlantic coast and a hotspot of oyster farming, the abundance of A. senhousia has recently increased, following a lag time of ca. 20 years since the first record of the species in 2002. Here, we addressed the potential effects of the interaction between the A. senhousia invasion and heatwave intensity on ecosystem functioning within an intertidal mudflat. More precisely, two realistic intensities (“High” and “Severe”) of combined marine and atmospheric heatwaves were simulated in an experimental tidal mesocosm system, in which sediment cores collected in situ, hosting naturally varying densities of A. senhousia and the associated benthic communities, were exposed. Following a six-day exposure, community-scale responses were assessed by measuring benthic metabolism (oxygen and nutrient fluxes) in each core. The results show that, besides a significant enhancement of benthic metabolism with increasing heatwave intensity, mussel density clearly mediated the magnitude of the community-scale response, highlighting the importance of understanding the interactive effects of environmental stressors co-occurring with non-indigenous species, and their dependencies, for a better assessment of their impacts.Keywords: arcuatula senhousia, benthic habitat, ecosystem functioning, heatwaves, metabolism
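As a simple illustration of how such core-scale metabolism measurements work, the sketch below converts the change in oxygen concentration during a closed core incubation into an areal oxygen flux. The core geometry, incubation time and concentrations are illustrative values, not measurements from the Arcachon Bay experiment.

```python
# Minimal sketch of deriving a sediment community oxygen flux from the change
# in oxygen concentration during a closed core incubation.
# All values below are illustrative assumptions, not the study's data.

CORE_DIAMETER = 0.09      # m, assumed core inner diameter
WATER_HEIGHT = 0.15       # m, assumed overlying water column in the core
INCUBATION_H = 4.0        # h, assumed incubation duration
O2_START = 250.0          # umol/L at the start of incubation
O2_END = 205.0            # umol/L at the end

area = 3.14159 * (CORE_DIAMETER / 2) ** 2            # sediment surface, m2
volume_l = area * WATER_HEIGHT * 1000                # litres of overlying water

# negative flux = oxygen uptake by the sediment community
flux = (O2_END - O2_START) * volume_l / (area * INCUBATION_H)   # umol O2 m-2 h-1
print(f"sediment oxygen flux ~ {flux:.0f} umol O2 m-2 h-1")
```

Comparing such fluxes across heatwave treatments and mussel densities is what allows the community-scale metabolic response described above to be quantified.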
Procedia PDF Downloads 68