Search results for: feature combination
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4568

608 Mathematical Modelling of Biogas Dehumidification by Using of Counterflow Heat Exchanger

Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans

Abstract:

Dehumidification of biogas at biomass plants is essential for the energy-efficient burning of biomethane at the outlet. A few methods are widely used to reduce the water content of biogas, e.g. chiller/heat-exchanger-based cooling, the use of adsorbents such as PSA, or a combination of such approaches. A quite different method of biogas dehumidification is proposed and analyzed in this paper. The main idea is to direct the flow of biogas from the plant downwards around its outer wall, thus creating an additional insulating layer. As the temperature in the gas shell layer around the plant decreases from ~38°C to 20°C in summer, or even to 0°C in winter, water vapor condenses. The water at the bottom of the gas shell can be collected and drained away. In addition, another, upward shell layer is created on the outer side, after the condensate drainage point, to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research deals with the numerical modelling of the biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of the problem lies in the fact that the temperatures in the two channels are conjugated when the thermal resistance between the layers is low. MATLAB is used for multiphysics model development, numerical calculation and result visualization. An experimental installation of a biogas plant's vertical wall, with two additional layers of polycarbonate sheets and controlled gas flow, was set up to verify the modelling results. Gas flow at the inlet/outlet, temperatures between the layers and humidity were controlled and measured during a number of experiments. Good agreement with the modelling results for the vertical wall section allows the developed numerical model to be used to estimate parameters for the whole biogas dehumidification system. Numerical modelling of the biogas counterflow heat exchanger system placed on the plant's wall for various cases allows the thicknesses of the gas and insulation layers to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling a defined system configuration under known conditions helps predict the temperature and humidity content of the biogas at the outlet.
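The counterflow arrangement described here can be illustrated with the standard effectiveness-NTU relation. The sketch below is not the authors' MATLAB model but a minimal Python illustration; the NTU and capacity-rate ratio are hypothetical values, with only the inlet temperatures (38°C gas, 0°C winter ambient) taken from the abstract.

```python
import math

def counterflow_effectiveness(ntu: float, c_ratio: float) -> float:
    """Effectiveness of a counterflow heat exchanger (standard e-NTU relation)."""
    if abs(c_ratio - 1.0) < 1e-9:
        return ntu / (1.0 + ntu)
    return (1.0 - math.exp(-ntu * (1.0 - c_ratio))) / (
        1.0 - c_ratio * math.exp(-ntu * (1.0 - c_ratio))
    )

# Winter case from the abstract: biogas enters at ~38 C, outer layer near 0 C.
# NTU and capacity-rate ratio below are assumed, not taken from the paper.
t_hot_in, t_cold_in = 38.0, 0.0
ntu, c_ratio = 2.0, 0.9
eff = counterflow_effectiveness(ntu, c_ratio)
t_hot_out = t_hot_in - eff * (t_hot_in - t_cold_in)
print(f"effectiveness = {eff:.3f}, gas outlet temperature = {t_hot_out:.1f} C")
```

Cooling the gas below its dew point in this way is what drives the condensation the paper exploits.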

Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model

Procedia PDF Downloads 549
607 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework

Authors: Iulia E. Falcan

Abstract:

The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources, such as biomass. This paper uses NASA-derived hourly data on the weather patterns of sixteen European countries over the past twenty-five years, and load data from the European Network of Transmission System Operators for Electricity (ENTSO-E), to develop a stochastic optimization model. The model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technology portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand and ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper explicitly accounts for the inherent uncertainty of the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level is addressed by developing probability distributions for future weather data, and thus for the expected power output of RE technologies, rather than assuming known future power output. The second level is operationalized by introducing a Conditional Value at Risk (CVaR) constraint into the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5% and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
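A CVaR constraint at threshold α penalizes the mean of the worst α-fraction of outcomes. The sketch below shows only the sample-CVaR statistic itself, not the paper's full portfolio optimization; the loss scenarios are invented for illustration.

```python
def cvar(losses, alpha):
    """Conditional Value at Risk: mean of the worst alpha-fraction of losses."""
    tail = sorted(losses, reverse=True)          # worst outcomes first
    k = max(1, int(round(alpha * len(losses))))  # size of the alpha-tail
    return sum(tail[:k]) / k

# Hypothetical unserved-energy "losses" (GWh) for one candidate portfolio,
# one value per weather scenario
losses = [0, 0, 0, 0, 1, 1, 2, 3, 5, 8]
print(cvar(losses, 0.10))  # mean of the worst 10% of scenarios
print(cvar(losses, 0.20))  # mean of the worst 20% of scenarios
```

In a portfolio optimization, the constraint would require this quantity to stay below a chosen threshold for each candidate technology mix.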

Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization

Procedia PDF Downloads 170
606 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of the household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the operating range of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a simulation tool that models the behaviour of the people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting its state transitions. Appliance signatures are then formed from the extracted power, geometrical and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). We compute confusion-matrix-based performance metrics, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature that are based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
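Since DTW is central to the identification step, a minimal pure-Python sketch of the classic DTW distance may help. The toy power signatures are invented for illustration and are not drawn from LPG or REDD.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping paths
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Toy power signatures (W) at 1/60 Hz: the same appliance cycle, shifted in time
fridge = [0, 0, 120, 125, 122, 0, 0]
candidate = [0, 120, 124, 123, 0, 0, 0]
print(dtw_distance(fridge, candidate))
```

Because the warping path absorbs the time shift, the distance stays small for the same appliance cycle even when its onset moves between observations.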

Keywords: general appliance model, non intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 82
605 Sequential Padding: A Method to Improve the Impact Resistance in Body Armor Materials

Authors: Ankita Srivastava, Bhupendra S. Butola, Abhijit Majumdar

Abstract:

The application of shear thickening fluid (STF) has been shown to increase the impact resistance of textile structures for use as body armor materials. In the present research, STF was applied to Kevlar woven fabric to make the structure lightweight and flexible while improving its impact resistance. It was observed that achieving a fair add-on of STF on Kevlar fabric is difficult, as Kevlar fabric comes with a PTFE pre-coating that hinders its absorbency. Hence, a method termed sequential padding was developed in the present study to improve the add-on of STF on Kevlar fabric. Contrary to the conventional process, in which Kevlar fabric is treated with STF once at a single pressure, in the sequential padding method the fabrics are treated twice in sequence, using a combination of two pressures on the same sample. Kevlar fabrics of 200 GSM were used in the present study. STF was prepared by dispersing nano-silica in PEG at a 70% (w/w) concentration. Ethanol was added to the STF at a fixed ratio to reduce viscosity, and a high-speed homogenizer was used to make the dispersion. A total of nine STF-treated Kevlar fabric samples were prepared using varying combinations and sequences of three levels of padding pressure (0.5, 1.0 and 2.0 bar). The fabrics were dried at 80°C for 40 minutes in a hot air oven to evaporate the ethanol. Untreated and STF-treated fabrics were tested for add-on%. The impact resistance of the samples was tested on a dynamic impact tester at a fixed velocity of 6 m/s. Further, to observe the impact resistance performance under realistic conditions, a low-velocity ballistic test at 165 m/s was performed to confirm the results of the impact test. It was observed that both add-on% and impact energy absorption of the Kevlar fabrics increase significantly with the sequential padding process, compared to both the untreated fabric and the single-stage padding process. It was also determined that impact energy absorption is significantly better in STF-treated Kevlar fabrics when the first padding pressure is higher and the second padding pressure is lower. Sequentially padded Kevlar fabric shows an almost 125% increase in ballistic impact energy absorption (40.62 J) compared to the untreated fabric (18.07 J). These results are owing to the fact that treatment at high pressure during the first padding is responsible for a uniform distribution of STF within the fabric structure, while padding at a lower second pressure ensures a high add-on of STF, for an overall improvement in the impact resistance of the fabric. It is therefore concluded that the sequential padding process may help improve the impact performance of body armor materials based on STF-treated Kevlar fabrics.
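The "almost 125%" figure can be checked directly from the two energy values quoted in the abstract:

```python
# Reported ballistic impact energy absorption (J): sequentially padded vs untreated Kevlar
treated, untreated = 40.62, 18.07
increase_pct = (treated - untreated) / untreated * 100
print(f"increase: {increase_pct:.1f}%")  # ~124.8%, i.e. "almost 125%" as reported
```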

Keywords: body armor, impact resistance, Kevlar, shear thickening fluid

Procedia PDF Downloads 241
604 Field-Testing a Digital Music Notebook

Authors: Rena Upitis, Philip C. Abrami, Karen Boese

Abstract:

The success of one-on-one music study relies heavily on the ability of the teacher to provide sufficient direction to students during weekly lessons so that they can successfully practice from one lesson to the next. Traditionally, these instructions are given in a paper notebook, where the teacher makes notes for the students after describing a task or demonstrating a technique. The ability of students to make sense of these notes varies according to their understanding of the teacher’s directions, their motivation to practice, their memory of the lesson, and their abilities to self-regulate. At best, the notes enable the student to progress successfully. At worst, the student is left rudderless until the next lesson takes place. Digital notebooks have the potential to provide a more interactive and effective bridge between music lessons than traditional pen-and-paper notebooks. One such digital notebook, Cadenza, was designed to streamline and improve teachers’ instruction, to enhance student practicing, and to provide the means for teachers and students to communicate between lessons. For example, Cadenza contains a video annotator, where teachers can offer real-time guidance on uploaded student performances. Using the checklist feature, teachers and students negotiate the frequency and type of practice during the lesson, which the student can then access during subsequent practice sessions. Following the tenets of self-regulated learning, goal setting and reflection are also featured. Accordingly, the present paper addressed the following research questions: (1) How does the use of the Cadenza digital music notebook engage students and their teachers?, (2) Which features of Cadenza are most successful?, (3) Which features could be improved?, and (4) Is student learning and motivation enhanced with the use of the Cadenza digital music notebook? 
The paper describes the results of 10 months of field-testing of Cadenza, structured around the four research questions outlined above. Six teachers and 65 students took part in the study. Data were collected through video-recorded lesson observations, digital screen captures, surveys, and interviews. Standard qualitative protocols for coding results and identifying themes were employed to analyze the results. The results consistently indicated that teachers and students embraced the digital platform offered by Cadenza. The practice log and timer, the real-time annotation tool, the checklists, the lesson summaries, and the commenting features were found to be the most valuable functions by students and teachers alike. Teachers also reported that students progressed more quickly with Cadenza and achieved higher examination results than students who were not using Cadenza. Teachers identified modifications to Cadenza that would make it an even more powerful way to support student learning. These modifications, once implemented, will move the tool well past its traditional notebook uses to new ways of motivating students to practise between lessons and to communicate with teachers about their learning. Improvements called for by the teachers included the ability to duplicate archived lessons, split-screen viewing, and the addition of goal setting to the teacher window. In the concluding section, the proposed modifications and their implications for self-regulated learning are discussed.

Keywords: digital music technologies, electronic notebooks, self-regulated learning, studio music instruction

Procedia PDF Downloads 254
603 Anaerobic Digestion of Green Wastes at Different Solids Concentrations and Temperatures to Enhance Methane Generation

Authors: A. Bayat, R. Bello-Mendoza, D. G. Wareham

Abstract:

Two major categories of green waste are fruit and vegetable (FV) waste and garden and yard (GY) waste. Although anaerobic digestion (AD) is able to manage FV waste, there is less confidence in the conditions under which AD can handle GY waste (grass, leaves, tree and bush trimmings), mainly because GY waste contains lignin and other recalcitrant organics. GY waste in the dry state (TS ≥ 15%) can be digested at mesophilic temperatures; however, little methane data has been reported under thermophilic conditions, where conceivably better methane yields could be achieved. In addition, it is suspected that the methane yield could be increased at lower solids concentrations. As such, the aim of this research is to find the temperature and solids concentration conditions that produce the most methane, under two temperature regimes (mesophilic, thermophilic) and three solids states ('dry', 'semi-dry' and 'wet'). Twenty liters of GY waste was collected from a public park located in the northern district of Tehran. The clippings consisted of freshly cut grass as well as dry branches and leaves. The GY waste was chopped before being fed into a mechanical blender that reduced it to a paste-like consistency, giving an initial TS concentration of approximately 38%. Four hundred mL of anaerobic inoculum (average total solids (TS) concentration of 2.03 ± 0.131%, of which 73.4% were volatile solids (VS); soluble chemical oxygen demand (sCOD) of 4.59 ± 0.3 g/L) was mixed with the GY waste substrate paste, along with distilled water, to achieve a TS content of approximately 20%. For comparative purposes, approximately 20 liters of FV waste was ground in the same manner as the GY waste. Since FV waste has a much higher natural water content than GY waste, it was dewatered to obtain a starting TS concentration in the dry solid-state range (TS ≥ 15%); three samples were dewatered to an average starting TS concentration of 32.71%. The inoculum was added, along with distilled water, to dilute the initial FV TS concentrations down to semi-dry (10-15%) and wet (below 10%) conditions. Twelve 1-L batch bioreactors were loaded simultaneously with either GY or FV waste at TS concentrations ranging from 3.85 ± 1.22% to 20.11 ± 1.23%. The reactors were sealed and operated for 30 days while immersed in water baths maintaining a constant temperature of 37 ± 0.5°C (mesophilic) or 55 ± 0.5°C (thermophilic). A maximum methane yield of 115.42 L methane/kg VS added was obtained for the GY thermophilic-wet AD combination, an enhancement of 240% compared to the GY waste mesophilic-dry condition. The results confirm that higher temperature regimes and lower solids concentrations enhance the methane yield from GY waste, and a similar trend was observed for the anaerobic digestion of FV waste. Furthermore, maximum VS (53%) and sCOD (84%) reductions were achieved during the AD of GY waste under the thermophilic-wet condition.

Keywords: anaerobic digestion, thermophilic, mesophilic, total solids concentration

Procedia PDF Downloads 141
602 Virtual Reality in COVID-19 Stroke Rehabilitation: Preliminary Outcomes

Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini

Abstract:

Background: There is growing evidence that a Cerebral Vascular Accident (CVA) can be a consequence of COVID-19 infection, and understanding novel treatment approaches is important in optimizing patient outcomes. Case: This case explores the use of Virtual Reality (VR) in the treatment of a 23-year-old COVID-positive female presenting with left hemiparesis in August 2020. Imaging showed ischemic stroke of the right globus pallidus, thalamus, and internal capsule. Conventional rehabilitation was started two weeks later, with VR included. The game-based VR technology used was developed for stroke patients and is based on upper extremity exercises and functions. Physical examination showed left hemiparesis with muscle strength 3/5 in the upper extremity and 4/5 in the lower extremity. The range of motion of the shoulder was 90-100 degrees. The speech exam showed a mild decrease in fluency, and mild dynamic asymmetry of the lower lip was seen. Babinski was positive on the left. Gait speed was decreased (75 steps per minute). Intervention: Our game-based VR system was developed from upper extremity physiotherapy exercises for post-stroke patients, to increase active, voluntary movement of the upper extremity joints and improve function. The conventional program was initiated with active exercises: shoulder sanding for joint ROMs, shoulder walking, the shoulder wheel, combined movements of the shoulder, elbow, and wrist joints, alternating flexion-extension and pronation-supination movements, and pegboard and Purdue pegboard exercises. Fine-motor training included smart gloves, biofeedback, a finger ladder, and writing. The difficulty of the game increased at each stage of practice as the patient's performance progressed. Outcome: After 6 weeks of treatment, gait and speech were normal and upper extremity strength had improved to near-normal status. No adverse effects were noted. Conclusion: This case suggests that VR is a useful tool in the treatment of a patient with COVID-19-related CVA. The safety of newly developed instruments for such cases provides new approaches to improve therapeutic outcomes and prognosis, as well as to increase patient satisfaction.

Keywords: covid-19, stroke, virtual reality, rehabilitation

Procedia PDF Downloads 141
601 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The problem to solve in the first dataset is protein regression, and in the second, variety classification. Deep convolutional neural networks (CNNs) have the potential to utilize the spatio-spectral correlations within a hyperspectral image to estimate the qualitative and quantitative parameters simultaneously. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and these results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested.
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
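Of the preprocessing steps listed, SNV is straightforward to illustrate: each spectrum is centred on its own mean and scaled by its own standard deviation, which removes multiplicative scatter effects. A minimal pure-Python sketch; the five-band "spectrum" is invented for illustration and is not from either dataset.

```python
def snv(spectrum):
    """Standard normal variate: centre a spectrum and scale it to unit variance."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    std = (sum((x - mean) ** 2 for x in spectrum) / n) ** 0.5
    return [(x - mean) / std for x in spectrum]

# Toy 5-band NIR "spectrum" with a scatter-induced offset
raw = [1.2, 1.4, 1.8, 1.5, 1.1]
print(snv(raw))
```

Applied per pixel of a hyperspectral cube, the same transform normalizes every spectrum before it reaches the regression or classification model.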

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 99
600 Impact of Informal Institutions on Development: Analyzing the Socio-Legal Equilibrium of Relational Contracts in India

Authors: Shubhangi Roy

Abstract:

Relational contracts (informal understandings not enforceable by law) are a common feature of most economies, but their dominance is higher in developing countries, and such informality of economic sectors is often correlated with lower economic growth. The aim of this paper is to investigate whether informal arrangements, i.e. relational contracts, are a cause or a symptom of lower levels of economic and/or institutional development. The methodology involves an initial survey of 150 test subjects in Northern India. The subjects are all members of occupations in which they transact frequently, ensuring uniformity in transaction volume; however, they come from varied socio-economic backgrounds, ensuring sufficient variance in transaction values and allowing us to understand the relationship, if any, between the amount of money involved and the method of transaction used. The questions asked are both quantitative and qualitative, with the aim of observing behavior as well as the motivation behind it. An overarching similarity observed across all subjects' responses is that in an economy like India, with pervasive corruption and delayed litigation, economic participants have created alternative social sanctions to deal with non-performers. In a society that functions predominantly on caste, class and gender classifications, these sanctions can, in fact, be more cumbersome for a potential rule-breaker than the legal ramifications. Informality, therefore, is a symptom of weak formal regulatory enforcement and dispute settlement mechanisms. Additionally, the study bifurcates such informal arrangements into two separate systems: a) those that exist in addition to, and augment, a legal framework, creating an efficient socio-legal equilibrium; and b) those in conflict with the legal system in place. This categorization is an important step in regulating informal arrangements. Instead of considering the entire gamut of such arrangements as counter-developmental, it helps decision-makers understand when to dismantle informal systems (the latter) and when to pivot around them (the former). The paper hypothesizes that social arrangements that support formal legal frameworks allow for cheaper enforcement of regulations, with a lower enforcement cost burden on the state, while norms that contradict legal rules undermine the formal framework. In the presence of such norms, law infringement has no impact on the reputation of a business or individual beyond the punishment imposed under the law. This is especially exacerbated in the Indian legal system, where enforcement of penalties for non-performance of contracts is weak; in such a situation, individuals adhere more strictly to the social norm than to the legal norms, greatly undermining the role of regulations. The paper concludes with recommendations that allow policy-makers and legal systems to encourage the former category of informal arrangements while discouraging norms that undermine legitimate policy objectives. Through this investigation, we expand the understanding of tools of market development beyond regulations, allowing academics and policymakers to harness social norms for less disruptive and more lasting growth.

Keywords: distribution of income, emerging economies, relational contracts, sample survey, social norms

Procedia PDF Downloads 165
599 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools

Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus

Abstract:

Additive manufacturing has emerged as a fast-growing segment of manufacturing technology. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding, through which machine tools can realize a higher degree of flexibility and a shorter production time. Still, there are challenges that have to be accounted for in terms of maintaining the necessary machining accuracy, especially due to thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, both individually and assembled. This information will help to design a machine tool that is geometrically stable under the influence of high-power laser processes. This paper presents an approach to decrease the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered, enabling an optimized design of the machine tool, and of its components, in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g. laser power, laser beam angle, reflection coefficients and heat transfer coefficients; hence, a systematic approach to obtaining this matched FEM model is essential. The method has two constituent aspects: characterizing the thermal behavior of the structural components, and predicting the laser beam path in order to determine the relevant beam intensity on those components. To match the model, both aspects have to be combined and verified empirically. In this context, an essential component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process. Therefore, a turn-swivel table test bench, as well as an experimental set-up to measure the beam propagation, were developed and are described in the paper. In addition to the empirical investigation, a simulation counterpart of the described experiments is presented. Concluding, it is shown that the method, together with a good understanding of its two core aspects, the thermo-elastic machine behavior and the laser beam path, helps designers to minimize the loss of precision in the early stages of the design phase.
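The thermal response of a structural component to an imposed laser heat flux can be sketched with a simple 1-D explicit finite-difference model. This is not the matched FEM model described in the abstract; the slab geometry, material properties and laser flux below are all hypothetical.

```python
def step_temperatures(T, alpha, dx, dt, q_flux, k):
    """One explicit finite-difference step of 1-D heat conduction.
    The left face receives a heat flux q_flux (W/m^2); the right face is held fixed."""
    fo = alpha * dt / dx ** 2            # Fourier number; must be <= 0.5 for stability
    assert fo <= 0.5, "explicit scheme unstable"
    new = T[:]
    for i in range(1, len(T) - 1):       # interior nodes: plain diffusion
        new[i] = T[i] + fo * (T[i - 1] - 2 * T[i] + T[i + 1])
    # heated left face: ghost-node treatment of the imposed flux
    new[0] = T[0] + 2 * fo * (T[1] - T[0] + q_flux * dx / k)
    return new

# Hypothetical steel-like slab, 20 mm thick, laser flux on one face
T = [20.0] * 21                          # initial temperature (deg C), 1 mm spacing
for _ in range(500):                     # 500 steps of 0.02 s -> 10 s of heating
    T = step_temperatures(T, alpha=1.2e-5, dx=1e-3, dt=0.02, q_flux=5e4, k=50.0)
print(f"heated-face temperature after 10 s: {T[0]:.1f} C")
```

A matched model in the paper's sense would calibrate parameters such as the absorbed flux and heat transfer coefficients against test-bench measurements like those from the turn-swivel table set-up.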

Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects

Procedia PDF Downloads 265
598 The Effect of Rheological Properties and Spun/Meltblown Fiber Characteristics on “Hotmelt Bleed through” Behavior in High Speed Textile Backsheet Lamination Process

Authors: Kinyas Aydin, Fatih Erguney, Tolga Ceper, Serap Ozay, Ipar N. Uzun, Sebnem Kemaloglu Dogan, Deniz Tunc

Abstract:

In order to meet high growth rates in baby diaper industry worldwide, the high-speed textile backsheet lamination lines have recently been introduced to the market for non-woven/film lamination applications. It is a process where two substrates are bonded to each other via hotmelt adhesive (HMA). Nonwoven (NW) lamination system basically consists of 4 components; polypropylene (PP) nonwoven, polyethylene (PE) film, HMA and applicator system. Each component has a substantial effect on the process efficiency of continuous line and final product properties. However, for a precise subject cover, we will be addressing only the main challenges and possible solutions in this paper. The NW is often produced by spunbond method (SSS or SMS configuration) and has a 10-12 gsm (g/m²) basis weight. The NW rolls can have a width and length up to 2.060 mm and 30.000 linear meters, respectively. The PE film is the 2ⁿᵈ component in TBS lamination, which is usually a 12-14 gsm blown or cast breathable film. HMA is a thermoplastic glue (mostly rubber based) that can be applied in a large range of viscosity ranges. The main HMA application technology in TBS lamination is the slot die application in which HMA is spread on the top of the NW along the whole width at high temperatures in the melt form. Then, the NW is passed over chiller rolls with a certain open time depending on the line speed. HMAs are applied at certain levels in order to provide a proper de-lamination strength in cross and machine directions to the entire structure. Current TBS lamination line speed and width can be as high as 800 m/min and 2100 mm, respectively. They also feature an automated web control tension system for winders and unwinders. In order to run a continuous trouble-free mass production campaign on the fast industrial TBS lines, rheological properties of HMAs and micro-properties of NWs can have adverse effects on the line efficiency and continuity. 
NW fiber orientation and fineness, as well as the spunbond/meltblown composition of the fabric at the micro level, are significant factors affecting the degree of HMA “bleed-through.” As a result of this problem, frequent line stops are needed to clean the glue that accumulates on the chiller rolls, which significantly reduces line efficiency. HMA rheology is also important; to eliminate any bleed-through problem, one should have a good understanding of rheology-driven potential complications. The applied viscosity/temperature should therefore be optimized in accordance with the line speed, line width, NW characteristics, and the required open time for a given HMA formulation. In this study, we show practical aspects of potential preventive actions to minimize the HMA bleed-through problem, which may stem from both HMA rheological properties and NW spunbond/meltblown fiber characteristics.
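The open-time constraint mentioned above is essentially a distance/speed relationship. As a minimal sketch (the web path length below is an illustrative assumption, not a value from the study), the time available between slot-die application and chilling can be estimated as:

```python
def open_time_seconds(path_length_m: float, line_speed_m_per_min: float) -> float:
    """Open time: the interval between HMA application and chilling,
    i.e. the web path length divided by the line speed."""
    if line_speed_m_per_min <= 0:
        raise ValueError("line speed must be positive")
    return path_length_m / (line_speed_m_per_min / 60.0)

# Illustrative: at the quoted maximum of 800 m/min, a hypothetical 4 m
# path from slot die to chiller roll leaves only 0.3 s of open time.
t = open_time_seconds(4.0, 800.0)
```

This makes explicit why the HMA viscosity/temperature window must be matched to line speed: doubling the speed halves the open time available for bonding.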

Keywords: breathable, hotmelt, nonwoven, textile backsheet lamination, spun/melt blown

Procedia PDF Downloads 359
597 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms

Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic

Abstract:

Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general are often lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption, which accounts for a large volume of gas reserves. Gas production from tight shale deposits is made possible by extensive and deep well fracturing, which contacts large fractions of the formation. Conventional reservoir modelling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved to be very pessimistic and inaccurate. This paper presents a new approach to forecasting shale gas production by detailed modelling of gas desorption, diffusion, and non-linear flow mechanisms, in combination with a statistical representation of these processes. The model represents the porous medium as a cube in which free gas is present, with a sphere inside it (SiC: Sphere in Cube model) where gas is adsorbed onto the kerogen or organic matter. The sphere is further considered to consist of many layers of adsorbed gas in an onion-like structure. With pressure decline, gas desorbs first from the outermost layer of the sphere, causing a decrease in its molecular concentration. The newly available surface area and the change in concentration trigger the diffusion of gas from the kerogen. The process continues until all the gas present internally diffuses out of the kerogen, adsorbs onto the available surface area, and then desorbs into the nanopores and micro-fractures in the cube. Each SiC idealizes a gas pathway and is characterized by the sphere diameter and the length of the cube. The diameter allows gas storage, diffusion, and desorption to be modelled; the cube length accounts for the flow pathway in nanopores and micro-fractures. 
Many of these representative but general cells of the reservoir are put together and linked to a well or hydraulic fracture. The paper quantitatively describes these processes and clarifies the geological conditions under which successful shale gas production can be expected. A numerical model has been derived and implemented in FORTRAN to develop a simulator for shale gas production, treating the spheres as a source term in each grid block. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly assess gas production rates from shale formations. We also examine the effect of model input properties on gas production.
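The pressure-driven desorption described above is commonly parameterized with a Langmuir isotherm. The sketch below is a generic illustration of that source term, not the paper's exact formulation, and the parameter values in the test are made up:

```python
def langmuir_adsorbed(p: float, v_l: float, p_l: float) -> float:
    """Langmuir isotherm: adsorbed gas volume per unit rock mass at
    pressure p, where v_l is the Langmuir volume and p_l the Langmuir
    pressure (pressure at which half of v_l is adsorbed)."""
    return v_l * p / (p_l + p)

def desorbed_gas(p_init: float, p_now: float, v_l: float, p_l: float) -> float:
    """Gas released from the kerogen 'sphere' as pressure declines from
    p_init to p_now; this is the source term feeding the nanopores and
    micro-fractures of the surrounding cube."""
    return langmuir_adsorbed(p_init, v_l, p_l) - langmuir_adsorbed(p_now, v_l, p_l)
```

In a grid-block simulator of the kind described, a term like `desorbed_gas` would be evaluated per block and per time step as the local pressure declines.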

Keywords: adsorption, diffusion, non-linear flow, shale gas production

Procedia PDF Downloads 165
596 Supercritical Water Gasification of Organic Wastes for Hydrogen Production and Waste Valorization

Authors: Laura Alvarez-Alonso, Francisco Garcia-Carro, Jorge Loredo

Abstract:

Population growth and industrial development imply an increase in energy demand, and the problems caused by greenhouse gas emissions have inspired the search for clean sources of energy. Hydrogen (H₂) is expected to play a key role in the world’s energy future by replacing fossil fuels. The properties of H₂ make it a green fuel that does not generate pollutants and supplies sufficient energy for power generation, transportation, and other applications. Supercritical Water Gasification (SCWG) represents an attractive alternative for the recovery of energy from wastes. SCWG allows conversion of a wide range of raw materials into a fuel gas with a high content of hydrogen and light hydrocarbons by treating them at conditions above the critical point of water (temperature of 374°C and pressure of 221 bar). Methane, used as a transport fuel, is another important gasification product. The variety of gases and energy forms that can be produced, depending on the material gasified and the technology used to process it, shows the flexibility of SCWG. This feature allows it to be integrated with several industrial processes, as well as with power generation or waste-to-energy production systems. The final aim of this work is to study which conditions and equipment are the most efficient and advantageous for obtaining streams rich in H₂ from oily wastes, which represent a major problem for both the environment and human health throughout the world. In this paper, the relative complexity of the technology needed for feasible gasification process cycles is discussed, with particular reference to the different feedstocks that can be used as raw material, the different reactors, and the energy recovery systems. 
For this purpose, a review of the current status of SCWG technologies has been carried out, using different classifications based on key features such as the feed treated or the type of reactor and other apparatus. This analysis makes it possible to improve the technology’s efficiency through the study of model calculations and their comparison with experimental data, the establishment of kinetics for chemical reactions, the analysis of how the main reaction parameters affect the yield and composition of products, and the determination of the most common problems and risks that can occur. The results of this work show that SCWG is a promising method for the production of both hydrogen and methane. The most significant design choices are the reactor type and the process cycle, which can be conveniently adapted to the waste characteristics. Regarding the future of the technology, the design of SCWG plants has still to be optimized to include energy recovery systems, in order to reduce the equipment and operating costs that derive from the high temperature and pressure needed to bring water to the supercritical state, as well as to find solutions to the corrosion and clogging of reactor components.
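Using the critical-point figures quoted above, the SCWG operating condition can be expressed as a simple predicate (a trivial sketch, included only to make the threshold explicit):

```python
T_CRIT_C = 374.0    # critical temperature of water (deg C), as quoted in the text
P_CRIT_BAR = 221.0  # critical pressure of water (bar), as quoted in the text

def is_supercritical(temp_c: float, pressure_bar: float) -> bool:
    """SCWG requires operation above both the critical temperature
    and the critical pressure of water."""
    return temp_c > T_CRIT_C and pressure_bar > P_CRIT_BAR
```

Typical SCWG reactor conditions reported in the literature (roughly 400-700°C, 230-300 bar) satisfy this predicate; subcritical hydrothermal processes do not.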

Keywords: hydrogen production, organic wastes, supercritical water gasification, system integration, waste-to-energy

Procedia PDF Downloads 147
595 Flexural Properties of Typha Fibers Reinforced Polyester Composite

Authors: Sana Rezig, Yosr Ben Mlik, Mounir Jaouadi, Foued Khoffi, Slah Msahli, Bernard Durand

Abstract:

With increasing environmental concerns, natural fibers are once again being considered as reinforcements for polymer composites. The main objective of this study is to explore another natural resource, Typha fiber, which is renewable, without production cost, and abundantly available in nature. The aim was to study the flexural properties of polyester composites with and without reinforcement by Typha leaf and stem fibers. The specimens were made by the hand lay-up process using a polyester matrix. In our work, we focused on the effect of various treatment conditions (sea water, alkali treatment, and a combination of the two treatments), as surface modifiers, on the flexural properties of the Typha fiber reinforced polyester composites. Moreover, the weight ratio of Typha leaf or stem fibers was investigated. Both fibers from the leaf and the stem of the Typha plant were used to evaluate the reinforcing effect. Another parameter, the reinforcement structure, was also investigated: a first composite was made with an air-laid nonwoven structure of fibers, and a second with a mixture of fibers and resin, for each kind of treatment. Results show that the alkali treatment and the combined process provided better mechanical properties of the composites in comparison with fibers treated by sea water. The fiber weight ratio influenced the flexural properties of the composites. Indeed, maximum flexural strengths of 69.8 and 62.32 MPa, with flexural moduli of 6.16 and 6.34 GPa, were observed for composites reinforced with leaf and stem fibers, respectively, at a 12.6% fiber weight ratio. Among the treatments carried out, treatment with caustic soda, whether alone or after retting in sea water, showed the best results because it improves the adhesion between the polyester matrix and the reinforcing fibers. SEM photographs were taken to ascertain the effects of the surface treatment of the fibers. 
By varying the structure of the Typha fibers, the reinforcement used in bulk showed more effective results than that used in the nonwoven structure. Flexural strength rose by about 65.32% for the composite reinforced with a mixture of 12.6% leaf fibers, and by 27.45% for the composite reinforced with a nonwoven structure of 12.6% leaf fibers. To better evaluate the effect of fiber origin, reinforcement structure, treatment, and reinforcement factor on the performance of the composite materials, a statistical study was performed using Minitab: ANOVA was applied, and the patterns of the main effects of these parameters and the interactions between them were established. According to the statistical analysis, the fiber treatment and the reinforcement structure seem to be the most significant parameters.
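The significance test behind the Minitab analysis is a standard ANOVA F test. As a minimal sketch with made-up flexural strength values (not data from the study), a one-way F statistic can be computed as:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: ratio of between-group to within-group
    mean squares, for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical flexural strengths (MPa) for two treatment groups:
f_stat = one_way_anova_f([[60.1, 62.4, 61.8], [68.9, 70.2, 69.5]])
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that the factor, such as fiber treatment, explains more variation than random scatter. The paper's full design crosses several factors with interactions, which this one-way sketch does not capture.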

Keywords: flexural properties, fiber treatment, structure and weight ratio, SEM photographs, Typha leaf and stem fibers

Procedia PDF Downloads 415
594 Time-Interval between Rectal Cancer Surgery and Reintervention for Anastomotic Leakage and the Effects of a Defunctioning Stoma: A Dutch Population-Based Study

Authors: Anne-Loes K. Warps, Rob A. E. M. Tollenaar, Pieter J. Tanis, Jan Willem T. Dekker

Abstract:

Anastomotic leakage after colorectal cancer surgery remains a severe complication. Early diagnosis and treatment are essential to prevent further adverse outcomes. In the literature, it has been suggested that earlier reintervention is associated with better survival, but anastomotic leakage can occur with a highly variable time interval to index surgery. This study aims to evaluate the time-interval between rectal cancer resection with primary anastomosis creation and reoperation, in relation to short-term outcomes, stratified for the use of a defunctioning stoma. Methods: Data of all primary rectal cancer patients that underwent elective resection with primary anastomosis during 2013-2019 were extracted from the Dutch ColoRectal Audit. Analyses were stratified for defunctioning stoma. Anastomotic leakage was defined as a defect of the intestinal wall or abscess at the site of the colorectal anastomosis for which a reintervention was required within 30 days. Primary outcomes were new stoma construction, mortality, ICU admission, prolonged hospital stay and readmission. The association between time to reoperation and outcome was evaluated in three ways: Per 2 days, before versus on or after postoperative day 5 and during primary versus readmission. Results: In total 10,772 rectal cancer patients underwent resection with primary anastomosis. A defunctioning stoma was made in 46.6% of patients. These patients had a lower anastomotic leakage rate (8.2% vs. 11.6%, p < 0.001) and less often underwent a reoperation (45.3% vs. 88.7%, p < 0.001). Early reoperations (< 5 days) had the highest complication and mortality rate. Thereafter the distribution of adverse outcomes was more spread over the 30-day postoperative period for patients with a defunctioning stoma. Median time-interval from primary resection to reoperation for defunctioning stoma patients was 7 days (IQR 4-14) versus 5 days (IQR 3-13 days) for no-defunctioning stoma patients. 
The mortality rates after primary resection and after reoperation were comparable (for defunctioning vs. no defunctioning stoma, 1.0% vs. 0.7%, P=0.106, and 5.0% vs. 2.3%, P=0.107, respectively). Conclusion: This study demonstrated that early reinterventions after anastomotic leakage are associated with worse outcomes (i.e., mortality). Possibly, the combination of a physiological dip in the cellular immune response and the release of cytokines following surgery, together with the release of endotoxins caused by bacteremia originating from the leakage, leads to a more profound sepsis. Another explanation might be that early leaks are not contained within the pelvis, leading to a more profound sepsis that requires early reoperation. Leakage with or without a defunctioning stoma resulted in different types of reintervention and different time intervals between surgery and reoperation.

Keywords: rectal cancer surgery, defunctioning stoma, anastomotic leakage, time-interval to reoperation

Procedia PDF Downloads 138
593 Detection of Acrylamide Using Liquid Chromatography-Tandem Mass Spectrometry and Quantitative Risk Assessment in Selected Food from Saudi Market

Authors: Sarah A. Alotaibi, Mohammed A. Almutairi, Abdullah A. Alsayari, Adibah M. Almutairi, Somaiah K. Almubayedh

Abstract:

Concerns over the presence of acrylamide in food date back to 2002, when Swedish scientists reported that substantial amounts of acrylamide formed in carbohydrate-rich foods cooked at high temperatures. Similar findings were reported by other researchers, which prompted major international efforts to investigate dietary exposure and the subsequent health complications in order to properly manage this issue. In this work, we aim to determine the acrylamide level in different foods (coffee, potato chips, biscuits, and baby food) commonly consumed by the Saudi population. In a total of forty-three samples, acrylamide was detected in twenty-three samples at levels of 12.3 to 2850 µg/kg. Across the food groups, the highest concentration of acrylamide was found in coffee samples (<12.3-2850 μg/kg), followed by potato chips (655-1310 μg/kg), then biscuits (23.5-449 μg/kg), whereas the lowest acrylamide level was observed in baby food (<14.75-126 μg/kg). Most coffee, biscuit, and potato chip products contained high amounts of acrylamide, and they are also among the most commonly consumed products. Saudi adults had mean acrylamide exposures for coffee, potato, biscuit, and cereal of 0.07439, 0.04794, 0.01125, and 0.003371 µg/kg-b.w/day, respectively. Exposures of Saudi infants and children to the same types of food were 0.1701, 0.1096, 0.02572, and 0.00771 µg/kg-b.w/day, respectively. Most groups have percentiles that exceed the tolerable daily intake (TDI) cancer value (2.6 µg/kg-b.w/day). Overall, the margin of exposure (MOE) results show that the Saudi population is at high risk of acrylamide-related disease for all food types, and there is a chance of cancer risk in all age groups (all values ˂ 10,000). Furthermore, for non-cancer risks, the acrylamide in all tested foods was within the safe limit (˃ 125), except for potato chips, for which there is a risk of disease in the population. 
With potato and coffee as raw materials, additional studies were conducted to assess different factors affecting acrylamide formation in fried potato and roasted coffee, including temperature, cooking time, and additives. By systematically varying processing temperatures and times, a mitigation of acrylamide content was achieved by lowering the temperature and decreasing the cooking time. Furthermore, it was shown that the combined addition of chitosan and NaCl had a large impact on acrylamide formation.
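The MOE screening used above divides a benchmark dose by the estimated exposure. A minimal sketch follows; the BMDL₁₀ of 0.17 mg/kg-bw/day is the commonly cited EFSA figure for neoplastic effects, assumed here rather than taken from the study:

```python
BMDL10_UG_PER_KG_DAY = 170.0  # assumed benchmark dose (0.17 mg/kg-bw/day)

def margin_of_exposure(exposure_ug_per_kg_day: float) -> float:
    """MOE = BMDL10 / dietary exposure; values below 10,000 are
    conventionally interpreted as a potential health concern for
    genotoxic carcinogens such as acrylamide."""
    return BMDL10_UG_PER_KG_DAY / exposure_ug_per_kg_day

# Mean adult coffee exposure reported in the study: 0.07439 ug/kg-bw/day
moe_coffee = margin_of_exposure(0.07439)  # ~2300, below the 10,000 threshold
```

This reproduces the qualitative conclusion above: even the lowest reported mean exposures yield MOE values under 10,000 with this benchmark dose.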

Keywords: risk assessment, dietary exposure, MOA, acrylamide, hazard

Procedia PDF Downloads 58
592 Creating Moments and Memories: An Evaluation of the Starlight 'Moments' Program for Palliative Children, Adolescents and Their Families

Authors: C. Treadgold, S. Sivaraman

Abstract:

The Starlight Children's Foundation (Starlight) is an Australian non-profit organisation that delivers programs, in partnership with health professionals, to support children, adolescents, and their families who are living with a serious illness. While supporting children and adolescents with life-limiting conditions has always been a feature of Starlight's work, providing a dedicated program, specifically targeting and meeting the needs of the paediatric palliative population, is a recent area of focus. Recognising the challenges in providing children’s palliative services, Starlight initiated a research and development project to better understand and meet the needs of this group. The aim was to create a program which enhances the wellbeing of children, adolescents, and their families receiving paediatric palliative care in their community through the provision of on-going, tailored, positive experiences or 'moments'. This paper will present the results of the formative evaluation of this unique program, highlighting the development processes and outcomes of the pilot. The pilot was designed using an innovation methodology, which included a number of research components. There was a strong belief that it needed to be delivered in partnership with a dedicated palliative care team, helping to ensure the best interests of the family were always represented. This resulted in Starlight collaborating with both the Victorian Paediatric Palliative Care Program (VPPCP) at the Royal Children's Hospital, Melbourne, and the Sydney Children's Hospital Network (SCHN) to pilot the 'Moments' program. As experts in 'positive disruption', with a long history of collaborating with health professionals, Starlight was well placed to deliver a program which helps children, adolescents, and their families to experience moments of joy, connection and achieve their own sense of accomplishment. 
Building on Starlight’s evidence-based approach and experience in creative service delivery, the program aims to use the power of 'positive disruption' to brighten the lives of this group and create important memories. The clinical and Starlight team members collaborate to ensure that the child and family are at the centre of the program. The design of each experience is specific to their needs and ensures the creation of positive memories and family connection, with the aim that each moment enhances quality of life. The partnership with the VPPCP and SCHN has allowed the program to reach families across metropolitan and regional locations. In late 2019, a formative evaluation of the pilot was conducted utilising both quantitative and qualitative methodologies to document both the delivery and the outcomes of the program. Central to the evaluation were the interviews conducted with clinical teams and families in order to gain a comprehensive understanding of the impact of, and satisfaction with, the program. The findings, which will be shared in this presentation, provide practical insight into the delivery of the program, the key elements for its success with families, and areas which could benefit from additional research and focus. Stories and case studies from the pilot are used to highlight the impact of the program and to discuss the opportunities, challenges, and learnings that emerged.

Keywords: children, families, memory making, pediatric palliative care, support

Procedia PDF Downloads 99
591 Raman Spectroscopy of Fossil-like Feature in Sooke #1 from Vancouver Island

Authors: J. A. Sawicki, C. Ebrahimi

Abstract:

The first geochemical, petrological, X-ray diffraction, Raman, Mössbauer, and oxygen isotopic analyses of the very intriguing 13-kg Sooke #1 stone, covered over 70% of its surface with black fusion crust and recovered from Sooke Basin, near Juan de Fuca Strait, in British Columbia, were reported as poster #2775 at LPSC52 in March. Our further analyses, reported in poster #6305 at 84AMMS in August, and comparisons with the Mössbauer spectra of Martian meteorite MIL03346 and of Martian rocks in Gusev Crater reported by Morris et al., suggest that the Sooke #1 find could be a stony achondrite of Martian polymict breccia type ejected from early watery Mars. Here, the Raman spectra of a carbon-rich ~1 mm² fossil-like white area identified on a polished cut surface of this rock have been examined in more detail. The low-intensity 532 nm and 633 nm beams of the Renishaw inVia microscope were used to avoid any destructive effects. The beam was focused through the microscope objective to a 2 µm spot on the sample, and the backscattered light collected through this objective was recorded with a CCD detector. Raman spectra of dark areas outside the fossil showed bands of clinopyroxene at 320, 660, and 1020 cm⁻¹ and small peaks of forsteritic olivine at 820-840 cm⁻¹, in agreement with the results of the X-ray diffraction and Mössbauer analyses. Raman spectra of the white area showed the broad D band at ~1310 cm⁻¹, consisting of the main A1g mode at 1305 cm⁻¹, the E2g mode at 1245 cm⁻¹, and the E1g mode at 1355 cm⁻¹, due to stretching of diamond-like sp³ bonds in the diamond polytype lonsdaleite, as in the study by Ovsyuk et al. The band near 1600 cm⁻¹ consists mostly of the D2 band at 1620 cm⁻¹ and not of the narrower G band at 1583 cm⁻¹ due to E2g stretching in the planar sp² bonds that are the fundamental building blocks of the carbon allotropes graphite and graphene. In addition, broad second-order Raman bands were observed with the 532 nm beam at 2150, ~2340, ~2500, 2650, 2800, 2970, 3140, and ~3300 cm⁻¹ shifts. 
Second-order bands in diamond and other carbon structures are ascribed to combinations of the bands observed in the first-order region: here, 2650 cm⁻¹ as 2D, 2970 cm⁻¹ as D+G, and 3140 cm⁻¹ as 2G. Nanodiamonds are abundant in the Universe, found in meteorites, interplanetary dust particles, comets, and carbon-rich stars. The diamonds in meteorites are presently being intensively investigated using Raman spectroscopy. Such particles can be formed by a CVD process and during major impact shocks at ~1000-2300 K and ~30-40 GPa. It cannot be excluded that the fossil discovered in Sooke #1 could be a remnant of an alien carbon organism that transformed under shock impact into nanodiamonds. We trust that, for the benefit of research in the astro-bio-geology of meteorites, asteroids, Martian rocks, and soil, this find deserves further, more thorough investigation. If possible, the Raman SHERLOCK spectrometer operating on the Perseverance rover should also search for such objects in Martian rocks.

Keywords: achondrite, nanodiamonds, lonsdaleite, Raman spectra

Procedia PDF Downloads 151
590 Kinematic Modelling and Task-Based Synthesis of a Passive Architecture for an Upper Limb Rehabilitation Exoskeleton

Authors: Sakshi Gupta, Anupam Agrawal, Ekta Singla

Abstract:

An exoskeleton design for rehabilitation purposes encounters many challenges, including ergonomically acceptable wearing technology, compatibility of the architectural design with human motion, actuation type, human-robot interaction, etc. In this paper, a passive architecture for an upper limb exoskeleton is proposed to assist in rehabilitation tasks. Kinematic modelling is detailed for task-based kinematic synthesis of the wearable exoskeleton for self-feeding tasks. The exoskeleton architecture possesses expansion and torsional springs, which are able to store and redistribute energy over the human arm joints. The elastic characteristics of the springs have been optimized to minimize the mechanical work of the human arm joints. A hybrid combination of a 4-bar parallelogram linkage and a serial linkage was chosen: the 4-bar parallelogram linkage with an expansion spring acts as a rigid structure that provides the rotational degree of freedom (DOF) required for lowering and raising the arm, while the single linkage with a torsional spring allows the rotational DOF required for elbow movement. The focus of the paper is the kinematic modelling, analysis, and task-based synthesis framework for the proposed architecture, keeping in consideration the essential tasks of self-feeding and self-exercising during the rehabilitation of a partially healthy person. Primary functional movements (activities of daily living, ADL) are routine activities that people attend to every day, such as cleaning, dressing, and feeding. We focus on the feeding process, to make people independent with respect to feeding tasks. The tasks are targeted at post-surgery patients under rehabilitation with less than 40% weakness. The challenge addressed in this work is emulating the natural movement of the human arm. Human motion data are extracted through motion sensors for the targeted tasks of feeding and specific exercises. 
The task-based synthesis procedure and framework are discussed for the proposed architecture. The results include a simulation of the architectural concept tracking human-arm movements, displaying the kinematic and static study parameters for a standard human weight. D-H parameters are used for the kinematic modelling of the hybrid mechanism, and the model is used in task-based optimal synthesis with an evolutionary algorithm.
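The D-H modelling step can be sketched as follows. This is the generic standard Denavit-Hartenberg transform, not the paper's specific parameter table, and the two-link example is purely illustrative:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform (4x4 nested lists)
    for one joint: rotation theta about z, offset d along z,
    link length a along x, twist alpha about x."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """Chain two 4x4 transforms (e.g. upper arm then forearm)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Two unit-length links with both joints at zero: end-effector sits at x = 2.
T = mat_mul(dh_transform(0.0, 0.0, 1.0, 0.0), dh_transform(0.0, 0.0, 1.0, 0.0))
```

In a task-based synthesis loop, an evolutionary algorithm would vary the D-H parameters and score each candidate chain on how well the chained transform tracks the recorded feeding trajectories.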

Keywords: passive mechanism, task-based synthesis, emulating human-motion, exoskeleton

Procedia PDF Downloads 137
589 Targeting Methionine Metabolism in Gastric Cancer: Promising to Improve Chemosensitivity with Non-Heterogeneity

Authors: Nigatu Tadesse, Li Juan, Liuhong Ming

Abstract:

Gastric cancer (GC) is the fifth most common and fourth deadliest cancer in the world, with limited treatment options at the late advanced stage, at which surgical therapy is not recommended and chemotherapy remains the mainstay of treatment. However, the occurrence of chemoresistance, as well as intra-tumoral and inter-tumoral heterogeneity of the response to targeted and immunotherapy, underlines a clear unmet treatment need in gastroenterology. Several molecular and cellular alterations have been ascribed to chemoresistance in GC, including cancer stem cells (CSC) and tumor microenvironment (TME) remodeling. Cancer cells, including CSC, bear a higher metabolic demand, and major changes in the TME involve alterations of the gut microbiota interacting with nutrient metabolism. Metabolic upregulation in lipid, carbohydrate, amino acid, and fatty acid biosynthesis pathways has been identified as a common hallmark of GC. Metabolic addiction to methionine occurs in many cancer cells to promote the biosynthesis of S-adenosylmethionine (SAM), a universal methyl donor molecule, supporting the high rate of transmethylation in GC and promoting cell proliferation. Targeting methionine metabolism has been found to promote chemosensitivity with treatment non-heterogeneity. Methionine restriction (MR) promoted cell cycle arrest at the S/G2 phase and enhanced downregulation of GC cell resistance to apoptosis (including ferroptosis), which suggests the potential for synergy with chemotherapies acting at the S phase of the cell cycle as well as with those inducing apoptosis. Accumulated evidence shows that both the biogenesis and the intracellular metabolism of exogenous methionine could be safe and effective targets for therapy, either alone or in combination with chemotherapies. 
This review article provides an overview of the upregulation of the methionine biosynthesis pathway and of the molecular signaling through the PI3K/Akt/mTOR-c-MYC axis that promotes metabolic reprogramming, activating the expression of the L-type amino acid transporter 1 (LAT1) and the overexpression of methionine adenosyltransferase 2A (MAT2A) for the intracellular metabolic conversion of exogenous methionine to SAM in GC. It also discusses the potential of targeting these steps with novel therapeutic agents, such as methioninase (METase) and MAT2A, c-MYC, and methyltransferase-like 16 (METTL16) inhibitors, which are currently at clinical trial development stages, as well as future perspectives.

Keywords: gastric cancer, methionine metabolism, PI3K/Akt/mTORC1-c-MYC axis, gut microbiota, MAT2A, c-MYC, METTL16, methioninase

Procedia PDF Downloads 48
588 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area and thus plays an important role in numerous specifications such as durability, comfort, crash, etc. During the development of new vehicle projects at Renault, durability validation is always the main focus, while comfort is addressed later in the project. Therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Robustness is also an important concern, as it is related to manufacturing costs as well as to performance after the ageing of components such as shock absorbers. In this paper, an approach is proposed for a multi-objective optimization between chassis endurance and comfort that takes random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series is applied to predict the uncertainty intervals of a system’s responses from its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is carried out to build the response surfaces that statistically represent a black-box system. Secondly, over several iterations, an optimum set is proposed and validated, forming a Pareto front; at the same time, the robustness of each response, serving as an additional objective, is calculated from the predefined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameter tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example, applying road excitations from actual road measurements for both the endurance and comfort calculations. 
One indicator based on Basquin’s law is defined to compare the global chassis durability of different parameter settings. Another indicator, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and the reference tests confirm the good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
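The two indicators can be sketched generically; the exponent and signals below are illustrative assumptions, not the paper's exact definitions. A Basquin-based pseudo-damage sum serves for durability, and the RMS of the sprung-mass vertical acceleration for comfort:

```python
def basquin_pseudo_damage(stress_cycles, b=5.0):
    """Pseudo-damage from Basquin's law (N * S^b = const): each counted
    cycle at stress amplitude s contributes s**b, so the sum compares the
    relative durability of different parameter settings. The exponent b
    is material-dependent (5.0 here is an illustrative placeholder)."""
    return sum(n * s ** b for s, n in stress_cycles)

def comfort_rms(accelerations):
    """Root-mean-square of the sprung-mass vertical acceleration:
    lower values indicate better ride comfort."""
    return (sum(a * a for a in accelerations) / len(accelerations)) ** 0.5
```

With two such scalar objectives, each candidate parameter setting maps to a point in (damage, RMS) space, and the non-dominated points form the Pareto front described above.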

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 152
587 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance

Authors: George Zhou, Yunchan Chen, Candace Chien

Abstract:

Kidney replacement therapy is the current standard of care for end-stage renal disease. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope at 6 different locations along each patient’s AVF: artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds are labeled as “patent” (normal) or “stenotic” (abnormal), with labels validated against concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF; for example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes, in which the numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sounds and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline performance of our ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively. 
Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, we see that using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, or 100) results in progressively improved performance. In theory, the integer values should not matter, since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with location encodings and converge on the same solution. However, in the setting of limited data and computational resources, increasing the importance of the encoding at initialization either leads to faster convergence or helps the model escape a local minimum.
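The encoding step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the dictionary keys, the feature dimension, and the function name are assumptions for demonstration.

```python
import numpy as np

# Hypothetical ordinal site codes, following the ordering reported above.
SITE_CODES = {"artery": 0, "arch": 1, "proximal": 2,
              "middle": 3, "distal": 4, "anastomosis": 5}

def append_location(features: np.ndarray, site: str, scale: int = 100) -> np.ndarray:
    """Concatenate a scaled ordinal site code to a flattened feature vector."""
    code = SITE_CODES[site] * scale          # e.g. "arch" -> 100 when scale=100
    return np.concatenate([features.ravel(), [float(code)]])

feat = np.zeros(768)                          # stand-in for a flattened ViT feature vector
out = append_location(feat, "anastomosis")    # appends 500.0 as the final element
```

In this sketch, increasing `scale` from 1 to 100 reproduces the "scalar multiples" idea: the same ordinal structure, amplified at initialization.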

Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning

Procedia PDF Downloads 88
586 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor product quality and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location, respectively, under the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of a Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in detecting inferior products in Phase II. For more efficient application of control charts, estimators that are robust against the contaminations which may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function to estimate the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and the M-estimator of location with Huber and logistic psi-functions to estimate the process location parameter.
Phase I efficiency of the proposed estimators and Phase II performance of the Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that the robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances than conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
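Two of the robust estimators named above can be sketched briefly. This is an illustrative reconstruction on assumed data, written in Python rather than the authors' MATLAB, and is not their implementation; the example subgroup values are invented to show the effect of a single gross outlier.

```python
import numpy as np
from itertools import combinations_with_replacement

def hodges_lehmann(x):
    """One-sample Hodges-Lehmann location estimate: median of Walsh averages."""
    walsh = [(a + b) / 2.0 for a, b in combinations_with_replacement(x, 2)]
    return float(np.median(walsh))

def avg_dist_to_median(x):
    """Average absolute distance to the sample median (robust dispersion)."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(np.abs(x - np.median(x))))

# A rational subgroup contaminated by one gross outlier: the sample mean
# is dragged toward 50, while the Hodges-Lehmann estimate stays near 10.
subgroup = np.array([9.2, 10.1, 10.5, 9.8, 50.0])
```

A robust Xbar center line and control limits would then be built from such subgroup-wise robust estimates instead of the subgroup means and standard deviations.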

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 190
585 The Effects of Lighting Environments on the Perception and Psychology of Consumers of Different Genders in a 3C Retail Store

Authors: Yu-Fong Lin

Abstract:

The main purpose of this study is to explore the impact of different lighting arrangements that create different visual environments in a 3C retail store on the perception, psychology, and shopping tendencies of consumers of different genders. In recent years, the ‘emotional shopping’ model has been widely accepted in the consumer market; in addition to the emotional meaning and value of a product, the in-store ‘shopping atmosphere’ has also been increasingly regarded as significant. The lighting serves as an important environmental stimulus that influences the atmosphere of a store. Altering the lighting can change the color, the shape, and the atmosphere of a space. A successful retail lighting design can not only attract consumers’ attention and generate their interest in various goods, but it can also affect consumers’ shopping approach, behavior, and desires. 3C electronic products have become mainstream in the current consumer market. Consumers of different genders may demonstrate different behaviors and preferences within a 3C store environment. This study tests the impact of a combination of lighting contrasts and color temperatures in a 3C retail store on the visual perception and psychological reactions of consumers of different genders. The research design employs an experimental method to collect data from subjects and then uses statistical analysis adhering to a 2 x 2 x 2 factorial design to identify the influences of different lighting environments. This study utilizes virtual reality technology as the primary method by which to create four virtual store lighting environments. The four lighting conditions are as follows: high contrast/cool tone, high contrast/warm tone, low contrast/cool tone, and low contrast/warm tone. Differences in the virtual lighting and the environment are used to test subjects’ visual perceptions, emotional reactions, store satisfaction, approach-avoidance intentions, and spatial atmosphere preferences. 
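The 2 x 2 x 2 factorial structure described above can be enumerated directly; the labels below are illustrative shorthand, not the authors' variable names, and the third factor is assumed here to be participant gender as described in the study aims.

```python
from itertools import product

contrast = ["high", "low"]        # lighting contrast
tone = ["cool", "warm"]           # color temperature
gender = ["female", "male"]       # participant gender

# All 8 cells of the 2 x 2 x 2 factorial design.
cells = list(product(contrast, tone, gender))

# The 4 virtual lighting environments are the contrast x tone combinations.
lighting_envs = sorted({(c, t) for c, t, _ in cells})
```

Each lighting environment is experienced by subjects of both genders, giving the eight conditions analyzed in the factorial design.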
The findings of our preliminary test indicate that female subjects show a higher pleasure response than male subjects in a 3C retail store. Based on these findings, the researchers modified the questionnaires and the virtual 3C retail environments with different lighting conditions before conducting the final experiment. The results will provide information about the effects of retail lighting on the environmental psychology and the psychological reactions of consumers of different genders in a 3C retail store lighting environment. These results will help establish practical guidelines for retailers and interior designers on creating 3C retail store lighting and atmosphere.

Keywords: 3C retail store, environmental stimuli, lighting, virtual reality

Procedia PDF Downloads 390
584 Finite Element Analysis of Layered Composite Plate with Elastic Pin Under Uniaxial Load Using ANSYS

Authors: R. M. Shabbir Ahmed, Mohamed Haneef, A. R. Anwar Khan

Abstract:

Stress analysis plays an important role in the optimization of structures, and prior stress estimation leads to better product design. Composites find wide usage in industrial and home applications because of their high strength-to-weight ratio; in the aircraft industry especially, composites are used extensively because of their advantages over conventional materials. Composites are mainly made of orthotropic materials with unequal strength in different directions. Composite materials have the drawback of delamination and debonding because the bond materials are weaker than the parent materials, so composite joints should be analyzed properly before use in practical conditions. In the present work, a composite plate with an elastic pin is analyzed using the finite element software ANSYS. The geometry is built in ANSYS using a top-down approach with Boolean operations. The model is meshed with the three-dimensional layered element SOLID46 for the composite plate and the solid element SOLID45 for the pin material. Various combinations are considered to find the strength of the composite joint under uniaxial loading conditions. Owing to the symmetry of the problem, only a quarter geometry is built, and results are presented for the full model using ANSYS expansion options. The results show the effect of pin diameter on joint strength: as pin diameter increases, pin deflection and load sharing increase, while overall stress, pin stress, and contact pressure decrease because less load is carried by the plate material. Regarding the material effect, a pin with a higher Young's modulus deflects less, but the other parameters increase. Interference analysis shows increases in overall stress, pin stress, and contact stress along with pin bearing load. These increases should be understood properly when raising the load-carrying capacity of the joint.
Generally, every structure is preloaded to increase the compressive stress in the joint and thereby its load-carrying capacity. For composites, however, the stress increase should be analyzed carefully because of delamination and debonding caused by failure of the bond materials. When results for an isotropic combination are compared with the composite joint, the isotropic joint shows more uniform results with lower values for all parameters, mainly because of the applied layer angle combinations. All results are presented with the necessary pictorial plots.

Keywords: bearing force, frictional force, finite element analysis, ANSYS

Procedia PDF Downloads 334
583 Fort Conger: A Virtual Museum and Virtual Interactive World for Exploring Science in the 19th Century

Authors: Richard Levy, Peter Dawson

Abstract:

Ft. Conger, located in the Canadian Arctic, was one of the most remote 19th-century scientific stations. Established in 1881 on Ellesmere Island, the wood-framed station provided a permanent base from which to conduct scientific research. Under the charge of Lt. Greely, Ft. Conger was one of 14 expeditions conducted during the First International Polar Year (FIPY). Our research project “From Science to Survival: Using Virtual Exhibits to Communicate the Significance of Polar Heritage Sites in the Canadian Arctic” focused on the creation of a virtual museum website dedicated to one of the most important polar heritage sites in the Canadian Arctic. This website was developed under a grant from the Virtual Museum of Canada and enables visitors to explore the fort’s site from 1875 to the present, http://fortconger.org. Heritage sites are often viewed as static places; a goal of this project was to present the change that occurred over time as each new group of explorers adapted the site to their needs. The site was first visited by British explorer George Nares in 1875-76. Only later did the United States government select this site for the Lady Franklin Bay Expedition (1881-84), with research conducted under the FIPY (1882-83). Still later, Robert Peary and Matthew Henson attempted to reach the North Pole from Ft. Conger in 1899, 1905 and 1908. A central focus of this research is the virtual reconstruction of Ft. Conger. In the summer of 2010, a Zoller+Fröhlich Imager 5006i and a Minolta Vivid 910 laser scanner were used to scan terrain and artifacts. Once the scanning was completed, the point clouds were registered and edited to form the basis of a virtual reconstruction. A goal of this project has been to allow visitors to step back in time and explore the interior of these buildings with all of their artifacts.
Links to text, historic documents, animations, panoramic images, computer games and virtual labs explain how science was conducted during the 19th century. A major feature of this virtual world is the timeline. Visitors to the website can begin exploring the site when George Nares, in his ship HMS Discovery, appeared in the harbor in 1875. With the arrival of Lt. Greely’s expedition in 1881, we can track the progress made in establishing a scientific outpost. Later, in 1901, with Peary’s presence, the site is transformed again, with the huts having been built from materials salvaged from Greely’s main building. Finally, in 2010, we can visit the site in its present state of deterioration and learn about the laser scanning technology used to document it. The Science and Survival at Fort Conger project represents one of the first attempts to use virtual worlds to communicate the historical and scientific significance of polar heritage sites where first-hand visitor experiences are not possible because of their remote location.

Keywords: 3D imaging, multimedia, virtual reality, arctic

Procedia PDF Downloads 420
582 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions, where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), whose factors are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state.
The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them in order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the MN-GARCHX benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
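The factor-extraction step can be sketched as follows. This is a minimal illustration on synthetic data, assuming the usual eigendecomposition route to PCA; it is not the authors' implementation, and the sample dimensions are invented.

```python
import numpy as np

def pca_factors(returns: np.ndarray, k: int = 3) -> np.ndarray:
    """Extract the k leading principal-component factor series
    from a (days x indices) matrix of returns."""
    centered = returns - returns.mean(axis=0)
    cov = np.cov(centered, rowvar=False)        # cross-index covariance matrix
    _, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    loadings = eigvecs[:, ::-1][:, :k]          # k leading eigenvectors
    return centered @ loadings                  # factor time series (scores)

rng = np.random.default_rng(1)
rets = rng.normal(0.0, 0.01, size=(250, 8))     # e.g. 250 days, 8 world indices
factors = pca_factors(rets, k=3)
# The resulting factor series are mutually uncorrelated by construction.
```

The three factor columns would then enter the GARCHX variance equation as the exogenous regressors.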

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 297
581 Poly(ε-caprolactone)/Halloysite Nanotube Nanocomposites Scaffolds for Tissue Engineering

Authors: Z. Terzopoulou, I. Koliakou, D. Bikiaris

Abstract:

Tissue engineering offers a new approach to regenerating diseased or damaged tissues such as bone. Great effort is devoted to eliminating the need to remove non-degradable implants at the end of their life span, with biodegradable polymers playing a major part. Poly(ε-caprolactone) (PCL) is one of the best candidates for this purpose due to its high permeability, good biodegradability and exceptional biocompatibility, which has stimulated extensive research into its potential application in the biomedical field. However, PCL degrades much more slowly than other known biodegradable polymers, with a total degradation time of 2-4 years depending on the initial molecular weight of the device, owing to its relatively hydrophobic character and high crystallinity. Consequently, much attention has been given to tuning the degradation of PCL to meet the diverse requirements of biomedicine. PCL also lacks bioactivity, so when it is used in bone tissue engineering, new bone tissue cannot bond tightly to the polymeric surface. It is therefore important to incorporate reinforcing fillers into the PCL matrix to achieve a promising combination of bioactivity, biodegradability, and strength. In this study, natural clay halloysite nanotubes (HNTs) were incorporated into the PCL matrix via in situ ring-opening polymerization of caprolactone, at concentrations of 0.5, 1 and 2.5 wt%. Both unmodified HNTs and HNTs modified with aminopropyltrimethoxysilane (APTES) were used. The effect of nanofiller concentration and of functionalization with end-amino groups on the physicochemical properties of the prepared nanocomposites was studied. Mechanical properties were enhanced after the incorporation of nanofillers, while the modification further increased the tensile and impact strength values.
Thermal stability of PCL was not affected by the presence of nanofillers, while the crystallization rate, studied by Differential Scanning Calorimetry (DSC) and Polarized Light Optical Microscopy (POM), increased. All materials were subjected to enzymatic hydrolysis in phosphate buffer in the presence of lipases. Due to the hydrophilic nature of HNTs, the biodegradation rate of the nanocomposites was higher than that of neat PCL. To confirm the effect of hydrophilicity, contact angle measurements were also performed. An in vitro biomineralization test confirmed that all samples were bioactive, as mineral deposits were detected by X-ray diffractometry after incubation in SBF. All scaffolds were tested in a relevant cell culture using osteoblast-like cells (MG-63) to demonstrate their biocompatibility.

Keywords: biomaterials, nanocomposites, scaffolds, tissue engineering

Procedia PDF Downloads 316
580 Self-Assembling Layered Double Hydroxide Nanosheets on β-FeOOH Nanorods for Reducing Fire Hazards of Epoxy Resin

Authors: Wei Wang, Yuan Hu

Abstract:

Epoxy resin (EP), one of the most important thermosetting polymers, is widely applied in various fields due to its desirable properties, such as excellent electrical insulation, low shrinkage, outstanding mechanical stiffness, satisfactory adhesion and solvent resistance. However, like most polymeric materials, EP has fatal drawbacks, including inherent flammability and a high yield of toxic smoke, which restrict its application in fields requiring fire safety. It therefore remains a challenge, and an interesting subject, to develop new flame retardants that not only remarkably improve flame retardancy but also give the modified resins low toxic-gas generation. In recent work, polymer nanocomposites based on nanohybrids containing two or more kinds of nanofillers have drawn intensive interest because they can realize performance enhancements. Previous hybrids of carbon nanotubes (CNTs) and molybdenum disulfide provide a novel route to decorating layered double hydroxide (LDH) nanosheets on the surface of β-FeOOH nanorods; the deposited LDH nanosheets can fill the network and promote the efficiency of the β-FeOOH nanorods. Moreover, the synergistic effects between LDH and β-FeOOH can be anticipated to reduce the fire hazards of EP composites through a combination of condensed-phase and gas-phase mechanisms. As reported, β-FeOOH nanorods can act as a core for preparing hybrid nanostructures by combining with other nanoparticles through electrostatic attraction in a layer-by-layer assembly technique. In this work, LDH-nanosheet-wrapped β-FeOOH nanorod (LDH-β-FeOOH) hybrids were synthesized by a facile method, with the purpose of combining the characteristics of one-dimensional (1D) and two-dimensional (2D) structures to improve the fire resistance of epoxy resin. The hybrids dispersed well in the EP matrix and showed no obvious aggregation.
Thermogravimetric analysis and cone calorimeter tests confirmed that adding LDH-β-FeOOH hybrids to the EP matrix at a loading of 3% markedly improved the fire safety of the EP composites. The plausible flame retardancy mechanism was explored by thermogravimetric analysis coupled with infrared spectroscopy (TG-IR) and by X-ray photoelectron spectroscopy, and was attributed to combined condensed-phase and gas-phase effects: nanofillers migrated to the surface of the matrix during combustion, which not only shielded the EP matrix from external radiation and heat feedback from the fire zone, but also efficiently retarded the transport of oxygen and flammable pyrolysis products.

Keywords: fire hazards, toxic gases, self-assembly, epoxy

Procedia PDF Downloads 174
579 An Investigation into Enablers and Barriers of Reverse Technology Transfer

Authors: Nirmal Kundu, Chandan Bhar, Visveswaran Pandurangan

Abstract:

Technology is the most valued possession of a country or an organization. Economic development depends not on the stock of technology but on the capability to exploit it. Technology transfer is the principal way for developing countries to gain access to state-of-the-art technology. Traditional technology transfer is a unidirectional phenomenon in which technology is transferred from developed to developing countries. But the wind is now changing: there is general agreement that a global shift of economic power is under way from west to east. As China and India make the transition from users to producers, and from producers to innovators, this has increasingly important implications for the economy, technology and policy of global trade. As a result, reverse technology transfer has become a phenomenon and a field of study in technology management. The term “reverse technology transfer” is not well defined. Initially, the concept was associated with the phenomenon of “brain drain” from developing to developed countries. In a second phase, it was associated with the transfer of knowledge and technology from subsidiaries to multinationals. The time has now come to extend the concept to two organizations or countries, related or unrelated by traditional technology transfer, where the transferor originally received the technology through the traditional mode of technology transfer. The objective of this paper is to study (1) the present status of reverse technology transfer, (2) the factors that act as enablers of and barriers to reverse technology transfer, and (3) how a reverse technology transfer strategy can be integrated into the technology policy of a country to give it an economic boost. The research methodology used in this study is a combination of literature review, case studies and key informant interviews.
The literature review includes both published and unpublished sources. In the case studies, an attempt has been made to examine records of reverse technology transfer that have occurred in developing countries. For the key informant interviews, informal telephonic discussions were carried out with key executives of organizations (industry, universities and research institutions) actively engaged in the process of technology transfer, traditional as well as reverse. Reverse technology transfer is possible only by creating technological capabilities. The following four enablers, coupled with active and aggressive government action, can help build the technology base needed to reach the goal of reverse technology transfer: (1) imitation to innovation, (2) reverse engineering, (3) a collaborative R&D approach, and (4) preventing reverse brain drain. The barriers that stand in the way are the mindset of overdependence, over-subordination and a parent-child (rather than adult) attitude. By exploiting these enablers and overcoming the barriers to reverse technology transfer, developing countries like India and China can prove that going “reverse” is the best way to move forward and re-establish themselves as leaders of the future world.

Keywords: barriers of reverse technology transfer, enablers of reverse technology transfer, knowledge transfer, reverse technology transfer, technology transfer

Procedia PDF Downloads 399