Search results for: combined alkaline-hydrothermal pretreatment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2807

767 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site to improve subsurface imaging results. Regardless of the chosen survey methods, processing the massive amount of survey data is often a challenge. Currently available software applications are generally based on a one-dimensional assumption and designed for a desktop personal computer. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the limited capability of personal computers. High-performance software that enables real-time integration of multiple geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP can process data across vast spatiotemporal scales in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability to image the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 139
766 Liquefaction Potential Assessment Using Screw Driving Testing and Microtremor Data: A Case Study in the Philippines

Authors: Arturo Daag

Abstract:

The Philippine Institute of Volcanology and Seismology (PHIVOLCS) is enhancing its liquefaction hazard map towards a detailed probabilistic approach using Screw Driving Testing (SDS) and geophysical data. Target sites for liquefaction assessment are public schools in Metro Manila. Since the target sites are in a highly urbanized setting, the objective of the project is to conduct non-destructive geotechnical studies using SDS combined with geophysical surveys such as refraction microtremor array (ReMi), three-component microtremor horizontal-to-vertical spectral ratio (HVSR), and ground penetrating radar (GPR). Initial testing was conducted in areas affected by liquefaction during the Mw 6.1 earthquake of April 22, 2019, in the Province of Pampanga, Central Luzon. Numerous liquefaction events were documented in areas underlain by Quaternary alluvium and mostly covered by recent lahar deposits. SDS-estimated values showed a good correlation with actual SPT values obtained from available borehole data, confirming that SDS can be an alternative tool for liquefaction assessment that is more efficient in terms of cost and time than SPT and CPT; borehole drilling, moreover, has limited access in highly urbanized areas. To extend or extrapolate the SPT borehole data, non-destructive geophysical equipment was used. The three-component microtremor yields a 1-D seismic shear-wave velocity model of the upper 30 meters of the profile (Vs30). For the ReMi surveys, 12-geophone arrays with 6 to 8-meter spacing were used. From the microtremor data, the Factor of Safety was computed as the quotient of the Cyclic Resistance Ratio (CRR) and the Cyclic Stress Ratio (CSR). Complementary GPR was used to infer subsurface structures and groundwater conditions.
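The Factor of Safety calculation described above can be sketched in a few lines. This is a minimal illustration, assuming the simplified Seed-Idriss form of the Cyclic Stress Ratio; all numeric inputs are hypothetical, not values from the study.

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, rd):
    """Simplified Seed-Idriss CSR: 0.65 * (a_max/g) * (sigma_v/sigma_v') * rd."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def factor_of_safety(crr, csr):
    """FS = CRR / CSR; FS < 1 suggests liquefaction is likely."""
    return crr / csr

# Hypothetical inputs: peak ground acceleration of 0.3 g, total and
# effective vertical stresses in kPa, stress-reduction coefficient rd.
csr = cyclic_stress_ratio(a_max_g=0.3, sigma_v=100.0, sigma_v_eff=60.0, rd=0.95)
fs = factor_of_safety(crr=0.25, csr=csr)
print(round(csr, 3), round(fs, 2))  # → 0.309 0.81
```

With these illustrative numbers FS < 1, i.e. the site would be flagged as liquefaction-prone.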

Keywords: screw drive testing, microtremor, ground penetrating RADAR, liquefaction

Procedia PDF Downloads 182
765 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model

Authors: Anshika Kankane, Dongshik Kang

Abstract:

Marine debris has a significant influence on coastal environments, damaging biodiversity and causing losses to the marine and ocean sector. A functional, cost-effective, and automatic approach is needed to address this problem. Computer vision combined with a deep learning-based model is proposed to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The model is proposed to detect seven categories of litter using a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape matching network called HOGShape; detected debris can then be cleaned up in a timely manner by clean-up organizations using the system's warning notifications. The dataset for this system was created by annotating images taken by a fixed KaKaXi camera with seven category labels using the CVAT annotation tool. A HOG feature extractor pre-trained with LIBSVM is used, along with multiple template matching between HOG maps of images and HOG maps of templates, to improve the predicted masks obtained from Mask R-CNN training. The system is intended to alert cleanup organizations in a timely way with warning notifications based on live recorded beach debris data. The suggested network improves misclassified debris masks for objects with different illuminations, shapes, and viewpoints, and for litter with occlusions and vague visibility.

Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching

Procedia PDF Downloads 88
764 Ultrasound Therapy: Amplitude Modulation Technique for Tissue Ablation by Acoustic Cavitation

Authors: Fares A. Mayia, Mahmoud A. Yamany, Mushabbab A. Asiri

Abstract:

In recent years, non-invasive focused ultrasound (FU) has been utilized to generate bubbles (cavities) that ablate target tissue by mechanical fractionation. Intensities > 10 kW/cm² are required to generate the inertial cavities. The generation, rapid growth, and collapse of these inertial cavities cause tissue fractionation, a process called histotripsy. The ability to fractionate tissue from outside the body has many clinical applications, including the destruction of tumor masses. The process of tissue fractionation leaves a void at the treated site, where all the affected tissue is liquefied to sub-micron-sized particles. The liquefied tissue is eventually absorbed by the body. Histotripsy is a promising non-invasive treatment modality. This paper presents a technique for generating inertial cavities at lower intensities (< 1 kW/cm²). The technique (patent pending) is based on amplitude modulation (AM), whereby a low-frequency signal modulates the amplitude of a higher-frequency FU wave. The cavitation threshold is lower at low frequencies: the intensity required to generate cavitation in water at 10 kHz is two orders of magnitude lower than the intensity at 1 MHz. The amplitude modulation technique can operate in both continuous wave (CW) and pulsed wave (PW) modes, and the percentage modulation (modulation index) can be varied from 0% (thermal effect) to 100% (cavitation effect), thus allowing a range of ablating effects from hyperthermia to histotripsy. Furthermore, changing the frequency of the modulating signal allows control of the size of the generated cavities. Results from in vitro work demonstrate the efficacy of the new technique in fractionating soft tissue and solid calcium carbonate (chalk) material. The technique, when combined with MR or ultrasound imaging, will present a precise treatment modality for ablating diseased tissue without affecting the surrounding healthy tissue.
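The modulation scheme can be written down directly: a low-frequency sinusoid shapes the envelope of the high-frequency FU carrier. A minimal sketch follows; the frequencies and sampling rate are illustrative choices, not the patented parameters.

```python
import math

def am_waveform(t, f_c, f_m, m):
    """Amplitude-modulated drive signal: modulation index m spans
    0.0 (pure carrier, thermal regime) to 1.0 (100% modulation,
    cavitation regime)."""
    return (1.0 + m * math.cos(2 * math.pi * f_m * t)) * math.sin(2 * math.pi * f_c * t)

# One 10 kHz modulation period of a 1 MHz carrier, sampled at 100 MHz.
samples = [am_waveform(i / 1e8, f_c=1e6, f_m=1e4, m=1.0) for i in range(10000)]
# At m = 1 the envelope swings between 0 and 2 times the carrier amplitude.
peak = max(samples)
```

Lowering m toward 0 flattens the envelope back to a pure continuous carrier, which is the thermal (hyperthermia) end of the range described above.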

Keywords: focused ultrasound therapy, histotripsy, inertial cavitation, mechanical tissue ablation

Procedia PDF Downloads 301
763 Minimization of the Abrasion Effect of Fiber Reinforced Polymer Matrix on Stainless Steel Injection Nozzle through the Application of Laser Hardening Technique

Authors: Amessalu Atenafu Gelaw, Nele Rath

Abstract:

Laser hardening is currently becoming one of the most efficient and effective hardening techniques due to its significant advantages: a well-controlled heat source, the absence of a cooling medium, a self-quenching property, little distortion thanks to localized heat input, environmentally friendly behavior, and short operation times. Today, a variety of injection machines are used in the plastic, textile, electrical, and mechanical industries. With the fast growth of composite technology, fiber reinforced polymer matrices are becoming a common choice in these industries. Due to the abrasive nature of fiber reinforced polymer matrix composites on injection components, many parts wear out before the end of their design life. Niko, a company specialized in injection molded products, suffers from the short lifetime of the injection nozzles of its molds, due to the use of fiber reinforced and, therefore, more abrasive polymer matrices. To prolong the lifetime of these molds, hardening the susceptible components, such as the injection nozzles, was a must. In this paper, the laser hardening process is investigated on Unimax, a type of stainless steel. The investigation to obtain optimal results for the nozzle case was performed in three steps. First, the optimal parameters for the maximum possible hardenability of the nozzle material were determined on a flat sample, using experimental testing as well as thermal simulation. Next, the effect of an inclination on the maximum temperature was analyzed, both by experimental testing and by validation through simulation. Finally, the data were combined and applied to the nozzle. This paper describes possible strategies and methods for laser hardening of the nozzle to reach a hardness of at least 720 HV for the material investigated. It has been proven that the nozzle can be laser hardened to over 900 HV, with the prospect of even higher values when more precise positioning of the laser can be assured.

Keywords: absorptivity, fiber reinforced matrix, laser hardening, Nd:YAG laser

Procedia PDF Downloads 144
762 Comparison of Virtual Non-Contrast to True Non-Contrast Images Using Dual-Layer Spectral Computed Tomography

Authors: O’Day Luke

Abstract:

Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient, in both the true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e., gallbladder, urinary bladder), kidney, muscle, fat, and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and > 90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phases showed a good correlation in most tissue types; the aortic attenuation, however, was somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.
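The per-ROI comparison above reduces to simple paired statistics. A minimal sketch with hypothetical attenuation values (the study itself used 693 ROIs, not these numbers):

```python
# Hypothetical paired ROI attenuations in Hounsfield units (HU).
tnc = [42.0, 55.0, 10.0, 31.0, 48.0, -95.0, 38.0]  # true non-contrast
vnc = [39.5, 51.0, 12.0, 28.0, 44.0, -90.0, 35.0]  # virtual non-contrast

diffs = [t - v for t, v in zip(tnc, vnc)]
mean_diff = sum(diffs) / len(diffs)                       # signed bias
mean_abs_diff = sum(abs(d) for d in diffs) / len(diffs)   # typical error size
frac_within_15 = sum(abs(d) < 15 for d in diffs) / len(diffs)
```

The study's criteria map onto these quantities: a small signed bias, a mean absolute difference below 10 HU, and the fraction of pairs differing by less than 15 HU.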

Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison

Procedia PDF Downloads 127
761 A New Binder Mineral for Cement Stabilized Road Pavement Soils

Authors: Aydın Kavak, Özkan Coruk, Adnan Aydıner

Abstract:

Long-term performance of pavement structures is significantly impacted by the stability of the underlying soils. In situ subgrades often do not provide the support required to achieve acceptable performance under traffic loading and environmental demands. NovoCrete® is a powder binder mineral for cement stabilized road pavement soils. NovoCrete® combined with Portland cement at optimum water content increases crystalline formations during the hydration process, resulting in higher strengths, neutralized pH levels, and water impermeability. These changes in soil properties can transform existing unsuitable in-situ materials into suitable fill materials. The main features of NovoCrete® are that it is applicable to all types of soil, reduces premature cracking, improves soil properties, and creates base and subbase course layers with high bearing capacity while reducing hazardous materials. It can also be used for the stabilization of recyclable aggregates, old asphalt pavement aggregate, and similar materials. There are many applications in Germany, Turkey, India, and elsewhere; in this paper, a few field applications in Turkey are discussed. In road construction works, the binder material is used for cement stabilization: in these applications, 120-180 kg of cement is used per 1 m³ of soil, together with 2% of the NovoCrete® binder. The results of a plate loading test at a road construction site show only 1 mm of deformation under a 7 kg/cm² load. The modulus of subgrade reaction increased from 611 MN/m³ to 3673 MN/m³, and the soaked CBR values for the stabilized soils increased from 10-20% to 150-200%. According to these data, weak subgrade soil can be used as a base or subbase after the modification. The potential reduction in the need for quarried materials will help conserve natural resources, and the use of on-site or nearby materials in fills will significantly reduce transportation costs, providing both economic and environmental benefits.

Keywords: soil, stabilization, cement, binder, Novocrete, additive

Procedia PDF Downloads 206
760 The Integrated Methodological Development of Reliability, Risk and Condition-Based Maintenance in the Improvement of the Thermal Power Plant Availability

Authors: Henry Pariaman, Iwa Garniwa, Isti Surjandari, Bambang Sugiarto

Abstract:

Availability of a complex system such as a thermal power plant is strongly influenced by the reliability of spare parts and by maintenance management policies. Reliability-centered maintenance (RCM) is an established analysis technique and the main reference for maintenance planning. This method considers the consequences of failure in its implementation, but does not deal with the further risks associated with failures, such as downtime, loss of production, or high maintenance costs. The risk-based maintenance (RBM) technique provides support strategies to minimize the risks posed by failure and to select maintenance tasks with cost effectiveness in mind. Meanwhile, condition-based maintenance (CBM) focuses on condition monitoring, which allows maintenance or other action to be planned and scheduled so that the risk of failure is avoided ahead of time-based maintenance. Implementation of RCM, RBM, or CBM alone, or of RCM combined with RBM or with CBM, is current maintenance practice in thermal power plants. Implementing all three techniques in an integrated manner will increase the availability of thermal power plants compared to using the techniques individually or in combinations of two. This study uses reliability-, risk-, and condition-based maintenance in an integrated manner to increase the availability of thermal power plants. The method generates a Maintenance Priority Index (MPI), obtained as the Risk Priority Number (RPN) multiplied by the Risk Index (RI), and Failure Defense Tasks (FDT), which can include condition monitoring and assessment tasks in addition to maintenance tasks. Both the MPI and the FDT, obtained from the development of a functional tree, failure mode and effects analysis, fault-tree analysis, and risk analysis (risk assessment and risk evaluation), were then used to develop and implement a maintenance, monitoring, and condition-assessment plan and schedule, and ultimately to perform an availability analysis. The results of this study indicate that reliability-, risk-, and condition-based maintenance methods, applied in an integrated manner, can increase the availability of thermal power plants.
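The index arithmetic described above is easy to sketch. Only the MPI = RPN × RI relation comes from the abstract; the 1-10 rating scales and the example numbers are assumptions for illustration.

```python
def risk_priority_number(severity, occurrence, detection):
    """FMEA-style RPN: product of severity, occurrence, and detection ratings."""
    return severity * occurrence * detection

def maintenance_priority_index(rpn, risk_index):
    """MPI as described in the abstract: RPN multiplied by the Risk Index (RI)."""
    return rpn * risk_index

# Hypothetical ratings for one failure mode on assumed 1-10 scales.
rpn = risk_priority_number(severity=8, occurrence=5, detection=4)
mpi = maintenance_priority_index(rpn, risk_index=3)
print(rpn, mpi)  # → 160 480
```

Ranking failure modes by MPI rather than by RPN alone is what folds the risk dimension into the maintenance priority.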

Keywords: integrated maintenance techniques, availability, thermal power plant, MPI, FDT

Procedia PDF Downloads 778
759 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out by its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects that are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500 000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation in dependence on PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles, rather than on changes of the DNA-strand curvature. The gained insights are expected to be of significant help in designing effective gene-delivery applications.
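The Boltzmann inversion step mentioned above derives an initial CG potential from a target distribution via V(r) = -kB T ln g(r). A minimal sketch over a radial distribution function; the RDF values, distances, and units (kcal/mol) are illustrative assumptions, not data from the study.

```python
import math

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol K)
T = 300.0           # temperature in K

def boltzmann_inversion(rdf):
    """Initial CG pair potential V(r) = -kB*T*ln g(r), defined where g(r) > 0."""
    return {r: -K_B * T * math.log(g) for r, g in rdf.items() if g > 0.0}

# Illustrative RDF samples at a few bead-bead distances (nm).
rdf = {0.45: 0.2, 0.50: 1.8, 0.60: 1.0}
potential = boltzmann_inversion(rdf)
# g(r) > 1 (a favored distance) maps to a negative (attractive) potential,
# g(r) < 1 to a positive one, and g(r) = 1 to exactly zero.
```

In practice this inverted potential is only the starting guess; it is then refined iteratively until the CG simulation reproduces the AA distributions.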

Keywords: DNA condensation, gene delivery, polyethyleneimine, molecular dynamics

Procedia PDF Downloads 102
758 Agricultural Organized Areas Approach for Resilience to Droughts, Nutrient Cycle and Rural and Wild Fires

Authors: Diogo Pereira, Maria Moura, Joana Campos, João Nunes

Abstract:

As the war in Ukraine highlights the European Economic Area's vulnerability and external dependence regarding feed and food, agriculture gains significant importance. Transformative change is necessary to reach a sustainable and resilient agricultural sector. Agriculture is an important driver of the bioeconomy, of the equilibrium and survival of society, and of resilience to rural fires. The pressure of (1) water stress, (2) the nutrient cycle, and (3) a socio-demographic evolution towards 70% of the population living in urban systems and an aging rural population, combined with climate change, exacerbates the problem and paradigm of rural and wildfires, especially in Portugal. The Portuguese territory is characterized by (1) 28% marginal land, (2) soil quality inappropriate for agricultural activity over 70% of the territory, (3) micro smallholdings, with less than 1 ha per proprietor and mainly familiar and traditional agriculture in the North and Centre regions, and (4) the areas most vulnerable to rural fires lying in these same regions. The most important difference between the South and the North and Centre of Portugal with respect to rural and wildfires is agricultural activity, which is at a higher level in the South. In Portugal, rural and wildfires represent an average annual economic loss of around 800 to 1000 million euros. The WinBio model is an agri-environmental metabolism design with the capacity to create a new agri-food metabolism through Agricultural Organized Areas, a public-private partnership. This partnership seeks to grow agricultural activity in regions with (1) abandoned territory, (2) micro smallholdings, (3) water and nutrient management necessities, and (4) low agri-food literacy. It also aims to support planning and monitoring of resource-use efficiency and the sustainability of territories, using agriculture as a barrier against rural and wildfires in order to protect the rural population.

Keywords: agricultural organized areas, residues, climate change, drought, nutrients, rural and wild fires

Procedia PDF Downloads 56
757 Combined Tarsal Coalition Resection and Arthroereisis in Treatment of Symptomatic Rigid Flat Foot in Pediatric Population

Authors: Michael Zaidman, Naum Simanovsky

Abstract:

Introduction. Symptomatic tarsal coalition with rigid flat foot often demands an operative solution. An isolated coalition resection does not guarantee pain relief; correction of a co-existing foot deformity may be required. The objective of the study was to analyze the results of combining tarsal coalition resection with arthroereisis. Patients and methods. We retrospectively reviewed the medical records and radiographs of children operatively treated in our institution for symptomatic calcaneonavicular or talocalcaneal coalition between the years 2019 and 2022. Eight patients (twelve feet), 4 boys and 4 girls with a mean age of 11.2 years, were included in the study. In six patients (10 feet), a calcaneonavicular coalition was diagnosed; two patients (two feet) had a talocalcaneal coalition. To quantify the degree of foot deformity, we used the calcaneal pitch angle, the lateral talar-first metatarsal (Meary's) angle, and the talonavicular coverage angle. The clinical results were assessed using the American Orthopaedic Foot and Ankle Society (AOFAS) Ankle Hindfoot Score. Results. The mean follow-up was 28 months. The preoperative mean talonavicular coverage angle was 17.75° as compared with a postoperative mean angle of 5.4°. The calcaneal pitch angle improved from a mean of 6.8° to 16.4°. The mean preoperative Meary's angle of -11.3° improved to a mean of 2.8°. The mean AOFAS score improved from 54.7 points preoperatively to 93.1 points postoperatively. In nine of twelve feet, the overall clinical outcome judged by the AOFAS scale was excellent (90-100 points); in three feet, it was good (80-90 points). Six patients (ten feet) noticeably improved their subtalar range of motion. Conclusion. For symptomatic stiff or rigid flat feet associated with tarsal coalition, the combination of coalition resection and arthroereisis leads to normalization of radiographic parameters and to clinical and functional improvement with good patient satisfaction, and is likely to be more effective than the isolated procedures.

Keywords: rigid flat foot, tarsal coalition resection, arthroereisis, outcome

Procedia PDF Downloads 50
756 Investigating the Motion of a Viscous Droplet in Natural Convection Using the Level Set Method

Authors: Isadora Bugarin, Taygoara F. de Oliveira

Abstract:

Binary fluids and emulsions in general are present in a vast range of industrial, medical, and scientific applications, showing complex behaviors responsible for defining the flow dynamics and the system operation. However, the literature describing these fluids in non-isothermal models is still limited. The present work brings a detailed investigation of droplet migration due to natural convection in a square enclosure, aiming to clarify the effects of drop viscosity on the flow dynamics by showing how distinct viscosity ratios (droplet/ambient fluid) influence the drop motion and the final movement pattern maintained in the stationary regime. The analysis considered distinct combinations of Rayleigh number, drop initial position, and viscosity ratio. The Navier-Stokes and energy equations were solved under the Boussinesq approximation for a laminar flow, using the finite difference method combined with the Level Set method for the binary flow solution. Previous results collected by the authors showed that the Rayleigh number and the drop initial position drastically affect the motion pattern of the droplet. For Ra ≥ 10⁴, two very marked behaviors were observed according to the initial position: the drop travels either a helical path towards the center or a cyclic circular path, resulting in a closed cycle in the stationary regime. Varying the viscosity ratio significantly altered this pattern, exposing a large influence on the droplet path, capable of modifying the flow's behavior. The effects of viscosity on the flow's unsteady Nusselt number were also analyzed. Among the relevant contributions of this work is the potential use of the flow's initial conditions as a mechanism to control droplet migration inside the enclosure.
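The governing dimensionless group in the study above is the Rayleigh number. A minimal sketch with water-like property values; the numbers are hypothetical, chosen only to land in the Ra ≥ 10⁴ regime the authors discuss.

```python
def rayleigh_number(g, beta, delta_t, length, nu, alpha):
    """Ra = g * beta * dT * L^3 / (nu * alpha) for a buoyancy-driven cavity."""
    return g * beta * delta_t * length**3 / (nu * alpha)

# Hypothetical water-like properties in a 1 cm square cavity:
# thermal expansion beta (1/K), temperature difference (K), side length (m),
# kinematic viscosity nu (m^2/s), thermal diffusivity alpha (m^2/s).
ra = rayleigh_number(g=9.81, beta=2.1e-4, delta_t=10.0, length=0.01,
                     nu=1.0e-6, alpha=1.4e-7)
# Ra here is of order 10^5, i.e. inside the regime where the authors
# report the helical and cyclic droplet paths.
```

Raising the droplet viscosity enters through the viscosity ratio rather than through Ra, which is why the two parameters can be varied independently in the study.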

Keywords: binary fluids, droplet motion, level set method, natural convection, viscosity

Procedia PDF Downloads 100
755 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. 
Air pollution is a serious danger to international wellbeing and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the framework uses new data to self-adjust model parameters and increase prediction accuracy.
To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models and reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework that leverages multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the framework, testing it in different locations, and developing a platform to automatically publish future predictions as a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
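The model-averaging step described above can be sketched as follows. This is an illustrative toy using synthetic data and generic scikit-learn models, not the authors' actual pipeline: it trains the three algorithm families named in the abstract and averages their predicted class probabilities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Toy stand-ins for timing, weather, and past-pollutant features.
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=100, random_state=0),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
]
for m in models:
    m.fit(X[:400], y[:400])

# Average class probabilities across the three models, then threshold.
avg_proba = np.mean([m.predict_proba(X[400:])[:, 1] for m in models], axis=0)
combined_pred = (avg_proba > 0.5).astype(int)
accuracy = (combined_pred == y[400:]).mean()
```

Averaging probabilities (rather than majority-voting hard labels) lets a confident model outweigh two uncertain ones, which is one way an ensemble can smooth out any single model's biases.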

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 102
754 A Cooperative Signaling Scheme for Global Navigation Satellite Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Spatial diversity is one candidate, since it improves network throughput; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, in which virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate asynchronously, and thus the overall performance of the GNSS network can degrade severely. To tackle this problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement because they require signal decoding at the relay nodes. Although the relay-node implementation can be simplified to some degree by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to implement these operations at the source node, which has more resources than the relay nodes. Therefore, in this paper, we propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time-reversal, and conjugation at the relay nodes.
The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.
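For background, the virtual-MIMO combining that cooperative signaling emulates can be illustrated with a textbook two-branch Alamouti scheme. The NumPy sketch below (noiseless channel, generic notation, not the authors' scheme) shows the conjugation and sign-flip operations that the conventional approach places at the relay, and that the proposed scheme relocates to the source:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two QPSK symbols transmitted over two time slots (Alamouti space-time code).
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)  # source/relay channel gains

# Slot 1: source sends s1, relay sends s2.
r1 = h1 * s1 + h2 * s2
# Slot 2: conjugation and sign flip -- the operations the proposed
# scheme moves from the relay to the better-resourced source node.
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# Maximum-ratio combining at the receiver recovers both symbols.
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain
```

In the noiseless case the combiner recovers the transmitted symbols exactly, with the diversity gain |h1|² + |h2|² that motivates cooperative schemes.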

Keywords: global navigation satellite network, cooperative signaling, data combining, nodes

Procedia PDF Downloads 269
753 Development of a Two-Step 'Green' Process for (-) Ambrafuran Production

Authors: Lucia Steenkamp, Chris V. D. Westhuyzen, Kgama Mathiba

Abstract:

Ambergris, and more specifically its oxidation product (–)-ambrafuran, is a scarce, valuable, and sought-after perfumery ingredient. The material is used as a fixative agent to stabilise perfumes in formulations by reducing the evaporation rate of volatile substances. Ambergris is a metabolic product of the sperm whale (Physeter macrocephalus L.), resulting from intestinal irritation. Chemically, (–)-ambrafuran is produced from the natural product sclareol in eight synthetic steps, using harsh and often toxic chemicals in the process. An overall yield of no more than 76% can be achieved in some routes, but it is generally lower. A new 'green' route has been developed in our laboratory in which sclareol, extracted from the Clary sage plant, is converted to (–)-ambrafuran in two steps with an overall yield in excess of 80%. The first step uses a microorganism, Hyphozyma roseoniger, to bioconvert sclareol to an intermediate diol at substrate concentrations up to 50 g/L. The yield varies between 67 and 90% depending on the substrate concentration used. The diol product has a purity of 95% and is used without further purification in the next step. The intermediate diol is then cyclodehydrated to the final product (–)-ambrafuran using a zeolite, which is not harmful to the environment and is readily recycled. The yield of this step is 96%, and following a single recrystallization, the purity of the product is > 99.5%. A preliminary LC-MS study of the bioconversion identified several intermediates produced in the fermentation broth under oxygen-restricted conditions. Initially, a short-lived ketone is produced in equilibrium with a more stable pyranol, a key intermediate in the process. The latter is oxidised under Norrish type I cleavage conditions to yield an acetate, which is hydrolysed either chemically or by lipase action to afford the primary fermentation product, the intermediate diol.
All the intermediates identified point to CYP450 enzymes as the likely key catalysts in the mechanism. This invention is an exceptional example of how the power of biocatalysis, combined with a mild, benign chemical step, can be deployed to replace the total chemical synthesis of a specific chiral antipode of a commercially relevant material.

Keywords: ambrafuran, biocatalysis, fragrance, microorganism

Procedia PDF Downloads 193
752 Effects of Branched-Chain Amino Acid Supplementation on Sarcopenic Patients with Liver Cirrhosis

Authors: Deepak Nathiya, Ramesh Roop Rai, Pratima Singh, Preeti Raj, Supriya Suman, Balvir Singh Tomar

Abstract:

Background: Sarcopenia is a catabolic state in liver cirrhosis (LC), accelerated by the breakdown of skeletal muscle to release amino acids, which adversely affects survival, health-related quality of life, and response to any underlying disease. The primary objective of the study was to investigate the long-term effect of branched-chain amino acid (BCAA) supplementation on parameters associated with improved prognosis in sarcopenic patients with LC, as well as to evaluate its impact on cirrhosis-related events. Methods: We carried out a 24-week, single-center, randomized, open-label, controlled, two-cohort parallel-group intervention trial comparing the efficacy of BCAA against lactoalbumin (L-ALB) in 106 sarcopenic patients with liver cirrhosis. The BCAA (intervention) group was treated with 7.2 g of BCAA per day, whereas the L-ALB group was given 6.3 g of L-ALB. The primary outcome was the impact of BCAA on the parameters of sarcopenia: muscle mass, muscle strength, and physical performance. The secondary outcomes were combined survival and maintenance of liver function, and changes in laboratory and clinical markers over the six-month period. Results: Treatment with BCAA led to significant improvement in the sarcopenic parameters: muscle strength, muscle function, and muscle mass. Cirrhosis-related complications occurred less frequently, and cumulative event-free survival was more favorable, in the BCAA group than in the L-ALB group. Prognostic markers also improved significantly during the study. Conclusion: This clinical trial demonstrated that long-term BCAA supplementation improves sarcopenia and prognostic markers in patients with advanced liver cirrhosis.

Keywords: sarcopenia, liver cirrhosis, BCAA, quality of life

Procedia PDF Downloads 123
751 Development of an Optimised, Automated Multidimensional Model for Supply Chains

Authors: Safaa H. Sindi, Michael Roe

Abstract:

This project divides supply chain (SC) models into seven Eras, according to the evolution of the market's needs over time. The five earliest Eras describe the emergence of supply chains, while the last two Eras are to be created. Research objectives: the aim is to generate the two latest Eras, with their respective models, focusing on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM), which incorporates most characteristics of the SC and allocates them into four quarters (Agile, Lean, Leagile, and Basic SC). This will help companies, especially small and medium-sized enterprises (SMEs), plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six by accounting for all the supply chain factors (e.g., offshoring, sourcing, risk) in an interactive system with heuristic learning that helps larger companies and industries select the best SC model for their market. Methodologies: the data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic. The first round of the Delphi study contains statements (fuzzy rules) about the matrix of Era Six; the second round contains the feedback from the first round, and so on. Preliminary findings: both models are applicable. The matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the strategy (Basic SC, Lean, Agile, or Leagile) tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies in identifying and re-strategizing the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: the problematic issue facing many companies is deciding which SC model or strategy to adopt, given the many models and definitions developed over the years.
This research simplifies that choice by putting most definitions in a template and most models in the matrix of Era Six. This research is original in that the division of the SC into Eras, the matrix of Era Six (OMM) with Fuzzy-Delphi, and the heuristic learning in the AMM of Era Seven provide a synergy of tools not previously combined in the SC field. Additionally, the OMM of Era Six is unique in that it combines most characteristics of the SC, which is an original concept in itself.
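The Fuzzy-Delphi aggregation step that the methodology relies on can be sketched as follows. Everything here is illustrative: the linguistic scale, the expert ratings, and the 0.7 acceptance threshold are assumptions, not values from the study.

```python
# Map each expert's linguistic rating of a statement (fuzzy rule) to a
# triangular fuzzy number (low, mid, high), average component-wise, and
# defuzzify to decide whether the statement is accepted.
SCALE = {
    "disagree": (0.0, 0.0, 0.25),
    "neutral": (0.25, 0.5, 0.75),
    "agree": (0.5, 0.75, 1.0),
    "strongly agree": (0.75, 1.0, 1.0),
}

def fuzzy_delphi(ratings, threshold=0.7):
    """Aggregate expert ratings; return (defuzzified score, accepted?)."""
    fuzzy = [SCALE[r] for r in ratings]
    n = len(fuzzy)
    avg = tuple(sum(t[i] for t in fuzzy) / n for i in range(3))
    score = sum(avg) / 3.0  # simple centroid defuzzification
    return score, score >= threshold

score, accepted = fuzzy_delphi(["agree", "strongly agree", "agree", "neutral"])
```

Statements whose defuzzified score clears the threshold survive into the next Delphi round; the rest are revised or dropped, which is how consensus about the matrix statements would be built round by round.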

Keywords: Leagile, automation, heuristic learning, supply chain models

Procedia PDF Downloads 376
750 Processing and Characterization of Aluminum Matrix Composite Reinforced with Amorphous Zr₃₇.₅Cu₁₈.₆₇Al₄₃.₉₈ Phase

Authors: P. Abachi, S. Karami, K. Purazrang

Abstract:

Amorphous reinforcements (metallic glasses) can be considered promising options for reinforcing light-weight aluminum and its alloys. By using the proper type of reinforcement, one can overcome drawbacks such as interfacial de-cohesion and the undesirable reactions that can occur at the ceramic particle/metallic matrix interface. In this work, a Zr-based amorphous phase was produced via mechanical milling of elemental powders. The glass formability range was predicted from the Miedema semi-empirical model and diagrams of formation enthalpies and/or Gibbs free energies of the Zr-Cu amorphous phase in comparison with the crystalline phase. The composite was produced from a powder mixture of aluminum and the metallic glass by spark plasma sintering (SPS) at a temperature slightly above the glass transition temperature (Tg) of the metallic glass particles. The selected temperature and the rapid sintering route were suitable for consolidating the aluminum matrix without crystallization of the amorphous phase. To characterize amorphous phase formation, X-ray diffraction (XRD) phase analyses were performed on the powder mixture after specified milling intervals. The microstructure of the composite was studied by optical and scanning electron microscopy (SEM). Uniaxial compression tests were carried out on composite specimens 4 mm long with a cross-section of 2 × 2 mm². The micrographs indicated an appropriate reinforcement distribution in the metallic matrix. Comparison of the compressive stress-strain curves of the consolidated composite and the non-reinforced Al matrix alloy showed that the enhancement of yield strength and mechanical strength is combined with an appreciable plastic strain at fracture. It can be concluded that metallic glasses (amorphous phases) are an alternative reinforcement material for lightweight metal matrix composites, capable of producing high strength and adequate ductility, albeit at the expense of a minor increase in density.

Keywords: aluminum matrix composite, amorphous phase, mechanical alloying, spark plasma sintering

Procedia PDF Downloads 346
749 A Remote Sensing Approach to Estimate the Paleo-Discharge of the Lost Saraswati River of North-West India

Authors: Zafar Beg, Kumar Gaurav

Abstract:

The lost Saraswati is described as a large perennial river which was 'lost' in the desert towards the end of the Indus-Saraswati civilisation. It has been proposed that the lost Saraswati flowed in the Sutlej-Yamuna interfluve, parallel to the present-day Indus River. It is believed that one of the earliest known ancient civilizations, the Indus-Saraswati civilization, prospered along the course of the Saraswati River, and the demise of the Indus civilization is attributed to the desiccation of the river. Today the Sutlej-Yamuna interfluve hosts an ephemeral river known as the Ghaggar. It is believed that, along with the Ghaggar River, two other Himalayan rivers, the Sutlej and the Yamuna, were tributaries of the lost Saraswati and made a significant contribution to its discharge. The presence of a large number of archaeological sites and the occurrence of thick fluvial sand bodies in the subsurface of the Sutlej-Yamuna interfluve have been used to suggest that the Saraswati was a large perennial river. Further, the wide course of about 4-7 km recognized from satellite imagery of the Ghaggar-Hakra belt between Suratgarh and Anupgarh strengthens this hypothesis. Here we develop a methodology to estimate the paleo-discharge and paleo-width of the lost Saraswati River. In doing so, we rely on the hypothesis that the ancient Saraswati carried the combined flow, or some part of it, of the Yamuna, Sutlej, and Ghaggar catchments. We first established regime relationships between drainage area and channel width, and between catchment area and discharge, for 29 rivers presently flowing on the Himalayan Foreland from the Indus in the west to the Brahmaputra in the east. We found that the width and discharge of all the Himalayan rivers scale in a similar way when plotted against their corresponding catchment areas.
Using these regime curves, we calculate the width and discharge of the paleochannels originating from the Sutlej, Yamuna, and Ghaggar rivers by measuring their corresponding catchment areas from satellite images. Finally, we sum the discharges and widths obtained from the individual catchments to estimate the paleo-discharge and paleo-width of the Saraswati River. Our regime curves provide a first-order estimate of the paleo-discharge of the lost Saraswati.
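A regime relationship of the form Q = a·A^b (and analogously W = c·A^d for width) is typically fitted by least squares in log-log space. The sketch below uses synthetic catchment data with invented coefficients, since the 29-river dataset is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic catchment areas (km^2) and discharges (m^3/s) following a
# power law Q = a * A**b with multiplicative scatter (a, b are invented).
a_true, b_true = 0.05, 0.8
A = 10 ** rng.uniform(3, 6, size=29)  # 29 Himalayan-scale catchments
Q = a_true * A ** b_true * np.exp(rng.normal(scale=0.1, size=29))

# Fit log10(Q) = log10(a) + b * log10(A) by ordinary least squares.
b_fit, log_a_fit = np.polyfit(np.log10(A), np.log10(Q), 1)
a_fit = 10 ** log_a_fit

def regime_discharge(area_km2):
    """Discharge estimate for a catchment area via the fitted regime curve."""
    return a_fit * area_km2 ** b_fit

# Per the abstract, the paleo-discharge is the sum over the contributing
# catchments (areas below are placeholders, not the measured values).
total_q = sum(regime_discharge(a) for a in (5.0e4, 1.2e5, 3.0e4))
```

Fitting in log space makes the multiplicative scatter additive, which is why power-law regime curves are conventionally estimated this way.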

Keywords: Indus civilization, palaeochannel, regime curve, Saraswati River

Procedia PDF Downloads 166
748 Comparison of Regional and Local Indwelling Catheter Techniques to Prolong Analgesia in Total Knee Arthroplasty Procedures: Continuous Peripheral Nerve Block and Continuous Periarticular Infiltration

Authors: Jared Cheves, Amanda DeChent, Joyce Pan

Abstract:

Total knee arthroplasties (TKAs) are among the most common, but most painful, surgical procedures performed in the United States. Currently, the gold standard for postoperative pain management is the use of opioids. However, in the wake of the opioid epidemic, the healthcare system is attempting to reduce opioid consumption by trialing innovative opioid-sparing analgesic techniques such as continuous peripheral nerve blocks (CPNB) and continuous periarticular infiltration (CPAI). The alleviation of pain, particularly during the first 72 hours postoperatively, is of utmost importance because uncontrolled pain is associated with delayed recovery, impaired rehabilitation, immunosuppression, the development of chronic pain, the development of rebound pain, and decreased patient satisfaction. While both CPNB and CPAI are in use today, there is limited evidence comparing the two to the current standard of care or to each other. An extensive literature review was performed to explore the safety profiles and effectiveness of CPNB and CPAI in reducing reported pain scores and decreasing opioid consumption. The literature revealed that the use of CPNB contributed to lower pain scores and decreased opioid use when compared with opioid-only control groups. In contrast, CPAI did not improve pain scores or decrease opioid consumption when combined with a multimodal analgesic (MMA) regimen. When CPNB and CPAI were compared with each other, neither consistently lowered pain scores to a greater degree, but the literature indicates that CPNB decreased opioid consumption more than CPAI. More research is needed to further establish the efficacy of CPNB and CPAI as standard components of MMA in TKA procedures. In addition, future research could focus on novel catheter-free applications to reduce the complications of continuous catheter analgesics.

Keywords: total knee arthroplasty, continuous peripheral nerve blocks, continuous periarticular infiltration, opioid, multimodal analgesia

Procedia PDF Downloads 72
747 Approaching the Spatial Multi-Objective Land Use Planning Problems at Mountain Areas by a Hybrid Meta-Heuristic Optimization Technique

Authors: Konstantinos Tolidis

Abstract:

Mountains are amongst the most fragile environments in the world. The world's mountain areas cover 24% of the Earth's land surface and are home to 12% of the global population; a further 14% of the global population is estimated to live in their immediate vicinity. As urbanization continues to increase worldwide, mountains are also key centers for recreation and tourism, their attraction often heightened by remarkably high levels of biodiversity. Because the features of mountain areas vary spatially (degree of development, human geography, socio-economic reality, relations of dependency and interaction with other regions), spatial planning in these areas is a crucial process for preserving the natural, cultural, and human environment, and is one of the major components of an integrated spatial policy. This research focuses on the spatial decision problem of land use allocation optimization, a common planning problem in mountain areas. Such decisions must be made not only on what to do and how much to do, but also on where to do it, which adds a whole extra class of decision variables to the problem once spatial optimization is considered. The utility of optimization as a normative tool for spatial problems is widely recognized. However, it is very difficult for planners to quantify the weights of the objectives, especially when these relate to mountain areas. Furthermore, land use allocation optimization problems in mountain areas must be addressed by taking into account not only the general development objectives but also spatial objectives (e.g., compactness, compatibility, and accessibility). Therefore, the main objective of this research was to approach the land use allocation problem with a hybrid meta-heuristic optimization technique tailored to the spatial characteristics of mountain areas.
The results indicate that the proposed methodological approach is promising and useful, both for generating land use alternatives for further consideration in allocation decision-making and for supporting spatial management plans in mountain areas.
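A minimal illustration of weighted-sum spatial optimization with a compactness objective is given below. This is a toy stochastic hill climb on an invented grid, not the hybrid meta-heuristic developed in the study; the land uses, weights, and suitability rule are all assumptions.

```python
import random

random.seed(3)
N, USES = 8, ("forest", "pasture", "tourism")

def compactness(grid):
    """Count same-use neighbor pairs (higher = more compact allocation)."""
    same = 0
    for i in range(N):
        for j in range(N):
            if i + 1 < N and grid[i][j] == grid[i + 1][j]:
                same += 1
            if j + 1 < N and grid[i][j] == grid[i][j + 1]:
                same += 1
    return same

def suitability(grid):
    """Toy development objective: reward tourism cells near row 0 (the 'valley')."""
    return sum(N - i for i in range(N) for j in range(N)
               if grid[i][j] == "tourism")

def objective(grid, w_compact=1.0, w_suit=0.5):
    # Weighted sum of a spatial objective and a development objective.
    return w_compact * compactness(grid) + w_suit * suitability(grid)

grid = [[random.choice(USES) for _ in range(N)] for _ in range(N)]
best = objective(grid)
for _ in range(5000):  # simple stochastic hill climb over cell reassignments
    i, j = random.randrange(N), random.randrange(N)
    old = grid[i][j]
    grid[i][j] = random.choice(USES)
    new = objective(grid)
    if new >= best:
        best = new
    else:
        grid[i][j] = old  # reject worsening move
```

A real meta-heuristic (e.g., simulated annealing or a genetic algorithm) would also accept some worsening moves to escape local optima; the weighted-sum objective, however, shows concretely where the hard-to-quantify objective weights enter the problem.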

Keywords: multiobjective land use allocation, mountain areas, spatial planning, spatial decision making, meta-heuristic methods

Procedia PDF Downloads 315
746 Multi-Stage Optimization of Local Environmental Quality by Comprehensive Computer Simulated Person as Sensor for Air Conditioning Control

Authors: Sung-Jun Yoo, Kazuhide Ito

Abstract:

In this study, a comprehensive computer simulated person (CSP) that integrates a computational human model (virtual manikin) and a respiratory tract model (virtual airway) was applied to the estimation of indoor environmental quality. An inclusive prediction method was established by integrating computational fluid dynamics (CFD) analysis with the advanced CSP, which is combined with a physiologically based pharmacokinetic (PBPK) model and an unsteady thermoregulation model, enabling high-accuracy analysis of the micro-climate around the human body and the respiratory region. This comprehensive method can estimate not only contaminant inhalation but also the continuous interaction in contaminant transfer between the indoor space, i.e., the target area for indoor air quality (IAQ) assessment, and the respiratory zone, the target for health risk assessment. This study focused on using the CSP as an air/thermal quality sensor indoors, i.e., applying the comprehensive model to the assessment of IAQ and thermal environmental quality. A demonstrative analysis was performed to examine the applicability of the comprehensive model to a heating, ventilation, and air conditioning (HVAC) control scheme. The CSP was located at the center of a simple model room with dimensions of 3 m × 3 m × 3 m. Formaldehyde, generated from the floor material, was assumed to be the target contaminant, and analyses of the flow field, sensible/latent heat, and contaminant transfer in the indoor space were conducted using CFD simulation coupled with the CSP. In this analysis, thermal comfort was evaluated by thermoregulatory analysis, and respiratory exposure risks, represented by the adsorption flux/concentration at the airway wall surface, were estimated by PBPK-CFD hybrid analysis. These analysis results concerning IAQ and thermal comfort will be fed back to the HVAC control and could be used to find a suitable ventilation rate and energy requirement for the air conditioning system.

Keywords: CFD simulation, computer simulated person, HVAC control, indoor environmental quality

Procedia PDF Downloads 350
745 Study of the Relationship between the Civil Engineering Parameters and the Floating of Buoy Model Which Made from Expanded Polystyrene-Mortar

Authors: Panarat Saengpanya

Abstract:

This study had five objectives: the study of housing types in water environments, the physical and mechanical properties of the buoy material, the mechanical properties of the buoy models, the floating of the buoy models, and the relationship between civil engineering parameters and the floating of the buoy. The buoy specimens were made from expanded polystyrene (EPS) covered by a 5 mm mortar layer of equal thickness on each side. Specimens were 0.05 m cubes tested at a displacement rate of 0.005 m/min. The existing test method used to assess the parameter relationships is ASTM C 109, chosen to provide comparative results. The study identified three types of housing in water environments: stilt houses, boat houses, and floating houses. EPS is a lightweight material that has been used in engineering applications since at least the 1950s. Its density is about one-hundredth that of mortar, while the mortar strength was found to be 72 times that of EPS. One advantage of a composite is that two or more materials can be combined to take advantage of the good characteristics of each: the strength of the buoy is governed by the mortar, while its floating behaviour is governed by the EPS. The results showed that the buoy specimens compressed under loading. The stress-strain curve showed a high secant modulus before reaching the peak value. Failure occurred within 10% strain, after which the strength decreased while the strain continued to increase. It was observed that the failure strength decreased with increasing total specimen volume. For specimens with the same plan area, the failure strength increased with height. The results established relationships among five parameters: the floating level, the bearing capacity, the volume, the height, and the unit weight. The study also found that increases in buoy height lead to corresponding decreases in both modulus and compressive strength.
The total volume and the unit weight were related to the bearing capacity of the buoy.
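The floating level of such a composite buoy follows from Archimedes' principle: the buoy sinks until the displaced water weighs as much as the buoy itself. The sketch below uses the 5 mm mortar shell from the abstract but assumed densities and an assumed 0.5 m module size, so it illustrates the calculation rather than reproducing the study's measurements.

```python
# Densities (kg/m^3): assumed typical values, not measured in the study.
RHO_WATER, RHO_EPS, RHO_MORTAR = 1000.0, 20.0, 2100.0

def draft_of_cube(side_m, shell_m=0.005):
    """Submerged depth (m) of a floating cube with EPS core and mortar shell."""
    core = side_m - 2 * shell_m
    v_total = side_m ** 3
    v_core = core ** 3
    mass = RHO_EPS * v_core + RHO_MORTAR * (v_total - v_core)
    # Equilibrium: rho_water * side^2 * draft = mass (for a level-floating cube).
    draft = mass / (RHO_WATER * side_m ** 2)
    return draft if draft <= side_m else float("nan")  # nan = the cube sinks

d = draft_of_cube(0.5)  # draft of an assumed 0.5 m buoy module
```

Because the mortar shell volume grows with surface area while the buoyant EPS volume grows with the cube of the side, the floating level depends strongly on the buoy's size, which is consistent with the abstract's observed relationship between volume, unit weight, and floating level.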

Keywords: floating house, buoy, floating structure, EPS

Procedia PDF Downloads 126
744 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic development could change traffic flows. This study addresses both issues through a case study of the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a Generalized Delta Rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming for lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer public transportation for work and education trips. Vehicle traffic is expected to double within 50 years, while ESALs are expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state appears to be reached.
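The Generalized Delta Rule referenced above is backpropagation with a momentum term, Δw(t) = −η·∇E + α·Δw(t−1). The NumPy sketch below trains a small network on invented deterioration-like data; the network size, learning rate η, and momentum α are illustrative choices, not the study's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic inputs (e.g., normalized pavement age and cumulative ESALs)
# and a noisy condition-index target -- invented data, not the study's.
X = rng.uniform(size=(200, 2))
y = (1.0 - 0.6 * X[:, 0] - 0.3 * X[:, 1]
     + rng.normal(scale=0.02, size=200)).reshape(-1, 1)

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
eta, alpha = 0.05, 0.9  # learning rate and momentum of the GDR
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

init_loss = float(np.mean((forward(X)[1] - y) ** 2))
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y
    # Backpropagate the mean-squared error through the single hidden layer.
    g2, gb2 = h.T @ err / len(X), err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    g1, gb1 = X.T @ dh / len(X), dh.mean(axis=0)
    # Generalized Delta Rule: momentum-smoothed weight update.
    for k, (p, g) in enumerate(zip((W1, b1, W2, b2), (g1, gb1, g2, gb2))):
        vel[k] = alpha * vel[k] - eta * g
        p += vel[k]

final_loss = float(np.mean((forward(X)[1] - y) ** 2))
```

The momentum term smooths successive gradient steps, which is exactly the hyper-parameter pair (η, α) whose influence the abstract says has not been properly investigated.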

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 443
743 Efficiency Validation of Hybrid Geothermal and Radiant Cooling System Implementation in Hot and Humid Climate Houses of Saudi Arabia

Authors: Jamil Hijazi, Stirling Howieson

Abstract:

Over one-quarter of the Kingdom of Saudi Arabia's total oil production (2.8 million barrels a day) is used for electricity generation. The built environment is estimated to consume 77% of the total energy production; of this amount, air conditioning systems consume about 80%. Apart from considerations surrounding global warming and CO2 production, it has to be recognised that oil is a finite resource, and the KSA, like many other oil-rich countries, will have to start considering a horizon where hydrocarbons are not the dominant energy resource. The employment of hybrid ground cooling pipes in combination with black-body solar collection and radiant night cooling systems may have the potential to displace a significant proportion of the oil currently used to run conventional air conditioning plant. This paper presents an investigation into the viability of such hybrid systems, with the specific aim of reducing carbon emissions while providing all-year-round thermal comfort in a typical Saudi Arabian urban housing block. At the outset, air and soil temperatures were measured in the city of Jeddah. A parametric study was then carried out with computational simulation software (DesignBuilder) that utilised the field measurements and predicted the cooling energy consumption of both a base case and an ideal scenario (the typical block retrofitted with insulation, solar shading, ground pipes integrated with hypocaust floor slabs/stack ventilation, and radiant cooling pipes embedded in the floor). Initial simulation results suggest that careful 'ecological design' combined with hybrid radiant and ground-pipe cooling techniques can displace air conditioning systems, producing significant cost and carbon savings (both capital and running) without appreciable loss of amenity.

Keywords: energy efficiency, ground pipe, hybrid cooling, radiative cooling, thermal comfort

Procedia PDF Downloads 243
742 Gamma Irradiated Sodium Alginate and Phosphorus Fertilizer Enhances Seed Trigonelline Content, Biochemical Parameters and Yield Attributes of Fenugreek (Trigonella foenum-graecum L.)

Authors: Tariq Ahmad Dar, Moinuddin, M. Masroor A. Khan

Abstract:

There is a considerable need to enhance the content and yield of the active constituents of medicinal plants, given their massive demand worldwide. Different strategies have been employed to enhance the active constituents of medicinal plants, and the use of phytohormones has proved effective in this regard. Gamma-irradiated sodium alginate (ISA) is known to elicit an array of plant defense responses and biological activities in plants. Considering its medicinal importance, a pot experiment was conducted to explore the effect of ISA and phosphorus on the growth, yield, and quality of fenugreek (Trigonella foenum-graecum L.). ISA spray treatments (0, 40, 80 and 120 mg L⁻¹) were applied alone and in combination with 40 kg P ha⁻¹ (P40). Crop performance was assessed in terms of plant growth characteristics, physiological attributes, seed yield, and the content of seed trigonelline. Of the ten treatments, P40 + 80 mg L⁻¹ of ISA proved the best. The results showed that foliar spray of ISA, alone or in combination with P40, augmented the vegetative growth, enzymatic activities, trigonelline content, trigonelline yield, and economic yield of fenugreek. Application of 80 mg L⁻¹ of ISA with P40 gave the best results for almost all parameters studied, compared with the control or with 80 mg L⁻¹ of ISA applied alone. This treatment increased the total content of chlorophyll, carotenoids, leaf N, P, and K, and trigonelline relative to the control by 24.85 and 27.40%, 15 and 23.52%, 18.70 and 16.84%, 15.88 and 18.92%, and 12 and 14.44% at 60 and 90 DAS, respectively. The combined application of 80 mg L⁻¹ of ISA with P40 resulted in maximum increases in seed yield, trigonelline content, and trigonelline yield of 146, 34, and 232.41%, respectively, over the control.
Gel permeation chromatography revealed the formation of low-molecular-weight fractions in the ISA samples, containing oligomers of molecular weight below 20,000, which might be responsible for the plant growth promotion observed in this study. Trigonelline content was determined by reverse-phase high-performance liquid chromatography (HPLC) with a C-18 column.

Keywords: gamma-irradiated sodium alginate, phosphorus, gel permeation chromatography, HPLC, trigonelline content, yield

Procedia PDF Downloads 307
741 Stress Evaluation at Lower Extremity during Walking with Unstable Shoe

Authors: Sangbaek Park, Seungju Lee, Soo-Won Chae

Abstract:

Unstable shoes are known to strengthen lower-extremity muscles, improve gait ability, and change the user's gait pattern. A change in gait pattern affects the human body considerably because walking is a repetitive, steady form of locomotion in daily life. Joint motion, including joint moments, forces and inertial effects, can be estimated using kinematic and kinetic analysis; however, it has not been possible to estimate the change in internal stress at the articular cartilage. The purpose of this research is to evaluate the internal stress of the human body during gait with unstable shoes. In this study, finite element (FE) analysis was combined with motion capture experiments to obtain the boundary and loading conditions during walking. Motion capture experiments were performed on a participant walking with normal shoes and with unstable shoes. Inverse kinematics and inverse kinetics analyses were performed with OpenSim, from which the joint angles and muscle forces were estimated. A detailed FE model of the lower extremity was constructed, a joint coordinate system was added to it, and this coordinate system was aligned with that of the OpenSim model. Finally, the joint angles at each phase of gait were used to transform the FE model's posture to the actual posture captured in the experiment. The FE model was transformed into the postures of three major phases of gait (first peak of ground reaction force, mid-stance, and second peak of ground reaction force). The direction and magnitude of each muscle force were estimated by OpenSim and applied to the FE model at that muscle's attachment point. FE analysis was then performed to compare the stress at the knee cartilage during gait with normal shoes and with unstable shoes.
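The posture-update step described above (applying joint angles from motion capture to reposition the FE model) amounts to a rigid rotation of each distal segment's nodes about the joint centre. A minimal sketch of that step, using Rodrigues' rotation formula; the function names and the single-joint, single-axis simplification are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rotation_about_axis(axis, angle_rad):
    """Rodrigues' formula: 3x3 rotation matrix about a unit axis."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)

def reposition_segment(nodes, joint_centre, joint_axis, joint_angle_rad):
    """Rotate the FE nodes of a distal segment about a joint centre.

    nodes: (N, 3) array of node coordinates in the neutral posture.
    Returns the node coordinates in the captured posture.
    """
    R = rotation_about_axis(joint_axis, joint_angle_rad)
    return (nodes - joint_centre) @ R.T + joint_centre

# Example: rotate a tibia node 90 degrees about a knee flexion axis (x-axis).
knee_centre = np.array([0.0, 0.5, 0.0])
tibia_node = np.array([[0.0, 0.1, 0.0]])  # a node below the joint centre
moved = reposition_segment(tibia_node, knee_centre, [1.0, 0.0, 0.0], np.pi / 2)
```

In the full pipeline each gait phase would supply one set of joint angles, and every segment distal to a joint would be rotated in turn before the muscle forces are applied.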

Keywords: finite element analysis, gait analysis, human model, motion capture

Procedia PDF Downloads 306
740 Disaster Response Training Simulator Based on Augmented Reality, Virtual Reality, and MPEG-DASH

Authors: Sunho Seo, Younghwan Shin, Jong-Hong Park, Sooeun Song, Junsung Kim, Jusik Yun, Yongkyun Kim, Jong-Moon Chung

Abstract:

In order to cope effectively with large and complex disasters, disaster response training is needed. Recently, disaster response training led by the ROK (Republic of Korea) government has been implemented through a four-year R&D project that has several functions similar to the HSEEP (Homeland Security Exercise and Evaluation Program) of the United States, as well as several distinct features. Due to the unpredictability and diversity of disasters, existing training methods have many limitations in providing experience in the efficient use of disaster incident response and recovery resources. The challenge is always to be as efficient and effective as possible with the limited human and material resources available under the given time and environmental circumstances. To enable repeated training under diverse scenarios, a combined AR (Augmented Reality) and VR (Virtual Reality) simulator is under development. Unlike existing disaster response training, simulator-based training (which allows simultaneous multi-user training via remote login) is free from time and space constraints and can be repeated with different combinations of functions and disaster situations. Related systems exist, such as ADMS (Advanced Disaster Management Simulator) developed by ETC Simulation and HLS2 (Homeland Security Simulation System) developed by Elbit Systems. However, the ROK government needs a simulator custom-made for the country's environment and disaster types that also combines the latest information and communication technologies, including AR, VR, and MPEG-DASH (Moving Picture Experts Group - Dynamic Adaptive Streaming over HTTP). In this paper, a new disaster response training simulator is proposed to overcome the limitations of existing training systems and to adapt to actual disaster situations in the ROK, and several of its technical features are described.
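MPEG-DASH delivers media as short segments encoded at several bitrates, and the client selects a representation for each segment based on its measured throughput. A minimal sketch of that selection rule as it might serve a multi-user training simulator; the bitrate ladder and the safety margin are illustrative assumptions, not the simulator's actual streaming logic:

```python
def select_representation(bitrates_bps, throughput_bps, safety_margin=0.8):
    """Pick the highest bitrate fitting within a fraction of measured throughput.

    bitrates_bps: available representation bitrates, in ascending order.
    Falls back to the lowest representation if none fits the budget.
    """
    budget = throughput_bps * safety_margin
    chosen = bitrates_bps[0]
    for rate in bitrates_bps:
        if rate <= budget:
            chosen = rate
    return chosen

# Illustrative ladder: 1, 2.5, 5 and 8 Mbit/s representations.
ladder = [1_000_000, 2_500_000, 5_000_000, 8_000_000]
# With 4 Mbit/s measured throughput, the budget is 3.2 Mbit/s,
# so the 2.5 Mbit/s representation is chosen for the next segment.
choice = select_representation(ladder, throughput_bps=4_000_000)
```

Per-segment switching of this kind is what lets many remote trainees stream the same VR scenario smoothly over links of differing quality.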

Keywords: augmented reality, emergency response training simulator, MPEG-DASH, virtual reality

Procedia PDF Downloads 288
739 Predicting the Turbulence Intensity, Excess Energy Available and Potential Power Generated by Building-Mounted Wind Turbines over Four Major UK Cities

Authors: Emejeamara Francis

Abstract:

The future of potential wind energy applications within suburban/urban areas currently faces various problems. These include insufficient assessment of the urban wind resource, questions over the effectiveness of commercial gust control solutions, and the unavailability of effective, low-cost tools for scoping the potential of urban wind applications within built-up environments. Effective assessment of the potential of urban wind installations requires an estimate of the total energy that would be available to them were effective control systems used, and an evaluation of the potential power generated by the wind system. This paper presents a methodology for predicting the power generated by a wind system operating within an urban wind resource. The method was developed by using high-temporal-resolution wind measurements from eight potential sites within urban and suburban environments as inputs to a vertical-axis wind turbine multiple stream tube model. A relationship between the unsteady performance coefficient obtained from the stream tube model and turbulence intensity was demonstrated. Hence, an analytical methodology for estimating the unsteady power coefficient at a potential turbine site is proposed. This is combined with analytical models developed to predict the wind speed and the excess energy content (EEC) available, in order to estimate the potential power generated by wind systems at different heights within a built environment. Estimates of turbulence intensity, wind speed, EEC and turbine performance based on the current methodology allow a more complete assessment of the available wind resource and of potential urban wind projects. The methodology is applied to four major UK cities, namely Leeds, Manchester, London and Edinburgh, and the potential to map turbine performance at different heights within a typical urban city is demonstrated.
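Two of the quantities above have standard textbook definitions: turbulence intensity is the standard deviation of wind speed divided by its mean over an averaging window, and available power follows the kinetic-energy relation P = ½ρACpU³. A minimal sketch under those definitions; averaging U³ over the record (rather than cubing the mean) is what captures the excess energy carried by gusts, though the paper's fitted relationship between the unsteady power coefficient and turbulence intensity is not reproduced here:

```python
import numpy as np

def turbulence_intensity(u):
    """TI = sigma_u / mean(u) for a wind-speed time series (m/s)."""
    u = np.asarray(u, dtype=float)
    return u.std() / u.mean()

def mean_power(u, rotor_area_m2, cp, rho=1.225):
    """Mean power from P = 0.5 * rho * A * Cp * u^3, averaged over the record.

    Averaging u**3 retains the extra energy carried by gusts
    that cubing the mean speed would discard.
    """
    u = np.asarray(u, dtype=float)
    return 0.5 * rho * rotor_area_m2 * cp * np.mean(u ** 3)

# Example: a gusty record versus a steady record with the same 8 m/s mean.
gusty = np.array([6.0, 10.0, 8.0, 6.0, 10.0, 8.0])
steady = np.full(6, 8.0)
ti = turbulence_intensity(gusty)
# The gusty record yields more mean power than the steady one at equal mean speed,
# which is the excess energy content the methodology seeks to quantify.
```

The rotor area, power coefficient, and example wind records are illustrative; the paper derives Cp from a vertical-axis turbine multiple stream tube model rather than treating it as a constant.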

Keywords: small-scale wind, turbine power, urban wind energy, turbulence intensity, excess energy content

Procedia PDF Downloads 260
738 Assessing the Impact of Heatwaves on Intertidal Mudflat Colonized by an Exotic Mussel

Authors: Marie Fouet, Olivier Maire, Cécile Masse, Hugues Blanchet, Salomé Coignard, Nicolas Lavesque, Guillaume Bernard

Abstract:

Exacerbated by global change, extreme climatic events such as atmospheric and marine heatwaves may interact with the spread of non-indigenous species and their associated impacts on marine ecosystems. Since the 1970s, introductions of non-indigenous species through oyster exchanges have been numerous. Among them, the Asian date mussel Arcuatula senhousia has colonized a large number of ecosystems worldwide (e.g., California, New Zealand, Italy). In these places, A. senhousia has caused important habitat modifications in the benthic compartment through the physical, biological, and biogeochemical effects associated with the development of dense mussel populations. In Arcachon Bay (France), a coastal lagoon on the French Atlantic coast and a hotspot of oyster farming, abundances of A. senhousia have recently increased, following a lag time of ca. 20 years since the first record of the species in 2002. Here, we addressed the potential effects of the interaction between A. senhousia invasion and heatwave intensity on ecosystem functioning within an intertidal mudflat. More precisely, two realistic intensities (“High” and “Severe”) of combined marine and atmospheric heatwaves were simulated in an experimental tidal mesocosm system, to which naturally varying densities of A. senhousia and the associated benthic communities were exposed in sediment cores collected in situ. Following a six-day exposure, community-scale responses were assessed by measuring benthic metabolism (oxygen and nutrient fluxes) in each core. The results show that, besides a significant enhancement of benthic metabolism with increasing heatwave intensity, mussel density clearly mediated the magnitude of the community-scale response, highlighting the importance of understanding the interactive effects of co-occurring environmental stressors and non-indigenous species for a better assessment of their impacts.

Keywords: arcuatula senhousia, benthic habitat, ecosystem functioning, heatwaves, metabolism

Procedia PDF Downloads 45