Search results for: combination rule
553 Application of Deep Learning and Ensemble Methods for Biomarker Discovery in Diabetic Nephropathy through Fibrosis and Propionate Metabolism Pathways
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Diabetic nephropathy (DN) is a major complication of diabetes, with fibrosis and propionate metabolism playing critical roles in its progression. Identifying biomarkers linked to these pathways may provide novel insights into DN diagnosis and treatment. This study aims to identify biomarkers associated with fibrosis and propionate metabolism in DN, analyze the biological pathways and regulatory mechanisms of these biomarkers, and develop a machine learning model to predict DN-related biomarkers and validate their functional roles. Publicly available transcriptome datasets related to DN (GSE96804 and GSE104948) were obtained from the GEO database (https://www.ncbi.nlm.nih.gov/gds), and 924 propionate metabolism-related genes (PMRGs) and 656 fibrosis-related genes (FRGs) were identified. The analysis began with the extraction of DN-differentially expressed genes (DN-DEGs) and propionate metabolism-related DEGs (PM-DEGs), followed by the intersection of these with fibrosis-related genes to identify key intersected genes. Instead of relying on traditional models, we employed a combination of deep neural networks (DNNs) and ensemble methods such as Gradient Boosting Machines (GBM) and XGBoost to enhance feature selection and biomarker discovery. Recursive feature elimination (RFE) was coupled with these advanced algorithms to refine the selection of the most critical biomarkers. Functional validation was conducted using convolutional neural networks (CNNs) for gene set enrichment and immune infiltration analysis, revealing seven significant biomarkers: SLC37A4, ACOX2, GPD1, ACE2, SLC9A3, AGT, and PLG. These biomarkers are involved in critical biological processes such as fatty acid metabolism and glomerular development, providing a mechanistic link to DN progression. Furthermore, a TF-miRNA-mRNA regulatory network was constructed using natural language processing models to identify 8 transcription factors and 60 miRNAs that regulate these biomarkers, while a drug-gene interaction network revealed potential therapeutic targets such as UROKINASE-PLG and ATENOLOL-AGT. This integrative approach, leveraging deep learning and ensemble models, not only enhances the accuracy of biomarker discovery but also offers new perspectives on DN diagnosis and treatment, specifically targeting the fibrosis and propionate metabolism pathways.
Keywords: diabetic nephropathy, deep neural networks, gradient boosting machines (GBM), XGBoost
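As a rough illustration of the feature-selection step described above (recursive feature elimination wrapped around a gradient-boosted tree model), the sketch below uses scikit-learn's RFE with an XGBoost classifier on simulated expression data. The data shapes, hyperparameters and the choice of seven retained features are placeholders and do not reproduce the authors' pipeline.

```python
# Illustrative sketch: recursive feature elimination around an XGBoost classifier
# to rank candidate biomarker genes (simulated data, hypothetical dimensions).
import numpy as np
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))    # 120 samples x 500 candidate genes (placeholder)
y = rng.integers(0, 2, size=120)   # 1 = DN, 0 = control (placeholder labels)

estimator = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                          eval_metric="logloss")
selector = RFE(estimator, n_features_to_select=7, step=0.1)  # keep 7 candidates
selector.fit(X, y)

selected = np.where(selector.support_)[0]
print("indices of retained candidate genes:", selected)
```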
552 Photocatalytic Disintegration of Naphthalene and Naphthalene-Similar Compounds in Indoor Air
Authors: Tobias Schnabel
Abstract:
Naphthalene and naphthalene-similar compounds are a common problem in the indoor air of buildings from the 1960s and 1970s in Germany. Tar-containing roofing felt was often laid under the concrete floor to prevent humidity from coming up through the floor. This tar-containing roofing felt has high concentrations of PAH (polycyclic aromatic hydrocarbons) and naphthalene. Naphthalene evaporates easily and contaminates the indoor air. Especially after renovation and energy-related modernization of the buildings, the naphthalene concentration rises because no forced air exchange can take place. Because of this problem, it is often necessary to replace the floors after renovation of the buildings. The MFPA Weimar (material research and testing facility), in cooperation with LEJ GmbH and Reichmann Gebäudetechnik GmbH, developed a technical solution for the disintegration of naphthalene and naphthalene-similar compounds in indoor air by photocatalytic reforming. Photocatalytic systems produce active oxygen species (hydroxyl radicals) by irradiating semiconductors at the wavelength of their band gap. The light energy separates the charges in the semiconductor, producing free electrons in the conduction band and electron holes. The holes can react with hydroxide ions to form hydroxyl radicals. The hydroxyl radicals produced are a strong oxidizing agent and can oxidize organic matter to carbon dioxide and water. During the research, new titanium oxide catalyst surface coatings were developed. This coating technology allows the production of a very porous titanium oxide layer on temperature-stable carrier materials. The porosity allows the naphthalene to be easily adsorbed by the surface coating, which accelerates the reaction of the heterogeneous photocatalysis. The photocatalytic reaction is induced by high-power, high-efficiency UV-A (ultraviolet) LEDs with a wavelength of 365 nm. Various tests in emission chambers and on the reformer itself show that a reduction of naphthalene at relevant concentrations between 2 and 250 µg/m³ is possible. The disintegration rate was at least 80%. To reduce the concentration of naphthalene from 30 µg/m³ to a level below 5 µg/m³ in a typical 50 m² classroom, an energy input of 6 kWh is needed. The benefit of the photocatalytic indoor air treatment is that every organic compound in the air can be disintegrated and reduced. The use of new photocatalytic materials in combination with highly efficient UV LEDs makes a safe and energy-efficient reduction of organic compounds in indoor air possible. At the moment, the air cleaning systems are taking the step from the prototype stage into use in real buildings.
Keywords: naphthalene, titanium dioxide, indoor air, photocatalysis
551 Redirecting Photosynthetic Electron Flux in the Engineered Cyanobacterium Synechocystis sp. PCC 6803 by the Deletion of Flavodiiron Protein Flv3
Authors: K. Thiel, P. Patrikainen, C. Nagy, D. Fitzpatrick, E.-M. Aro, P. Kallio
Abstract:
Photosynthetic cyanobacteria have been recognized as potential future biotechnological hosts for the direct conversion of CO₂ into chemicals of interest using sunlight as the solar energy source. However, in order to develop commercially viable systems, the flux of electrons from the photosynthetic light reactions towards specified target chemicals must be significantly improved. The objective of the study was to investigate whether the autotrophic production efficiency of specified end-metabolites can be improved in engineered cyanobacterial cells by rescuing excited electrons that are normally lost to molecular oxygen due to the cyanobacterial flavodiiron protein Flv1/3. Natively Flv1/3 dissipates excess electrons in the photosynthetic electron transfer chain by directing them to molecular oxygen in Mehler-like reaction to protect photosystem I. To evaluate the effect of flavodiiron inactivation on autotrophic production efficiency in the cyanobacterial host Synechocystis sp. PCC 6803 (Synechocystis), sucrose was selected as the quantitative reporter and a representative of a potential end-product of interest. The concept is based on the native property of Synechocystis to produce sucrose as an intracellular osmoprotectant when exposed to high external ion concentrations, in combination with the introduction of a heterologous sucrose permease (CscB from Escherichia coli), which transports the sucrose out from the cell. In addition, cell growth, photosynthetic gas fluxes using membrane inlet mass spectrometry and endogenous storage compounds were analysed to illustrate the consequent effects of flv deletion on pathway flux distributions. The results indicate that a significant proportion of the electrons can be lost to molecular oxygen via Flv1/3 even when the cells are grown under high CO₂ and that the inactivation of flavodiiron activity can enhance the photosynthetic electron flux towards optionally available sinks. The flux distribution is dependent on the light conditions and the genetic context of the Δflv mutants, and favors the production of either sucrose or one of the two storage compounds, glycogen or polyhydroxybutyrate. As a conclusion, elimination of the native Flv1/3 reaction and concomitant introduction of an engineered product pathway as an alternative sink for excited electrons could enhance the photosynthetic electron flux towards the target endproduct without compromising the fitness of the host.Keywords: cyanobacterial engineering, flavodiiron proteins, redirecting electron flux, sucrose
550 Field Environment Sensing and Modeling for Pears towards Precision Agriculture
Authors: Tatsuya Yamazaki, Kazuya Miyakawa, Tomohiko Sugiyama, Toshitaka Iwatani
Abstract:
The introduction of sensor technologies into agriculture is a necessary step to realize Precision Agriculture. Although sensing methodologies themselves have been prevailing owing to miniaturization and reduction in costs of sensors, there are some difficulties to analyze and understand the sensing data. Targeting at pears ’Le Lectier’, which is particular to Niigata in Japan, cultivation environmental data have been collected at pear fields by eight sorts of sensors: field temperature, field humidity, rain gauge, soil water potential, soil temperature, soil moisture, inner-bag temperature, and inner-bag humidity sensors. With regard to the inner-bag temperature and humidity sensors, they are used to measure the environment inside the fruit bag used for pre-harvest bagging of pears. In this experiment, three kinds of fruit bags were used for the pre-harvest bagging. After over 100 days continuous measurement, volumes of sensing data have been collected. Firstly, correlation analysis among sensing data measured by respective sensors reveals that one sensor can replace another sensor so that more efficient and cost-saving sensing systems can be proposed to pear farmers. Secondly, differences in characteristic and performance of the three kinds of fruit bags are clarified by the measurement results by the inner-bag environmental sensing. It is found that characteristic and performance of the inner-bags significantly differ from each other by statistical analysis. Lastly, a relational model between the sensing data and the pear outlook quality is established by use of Structural Equation Model (SEM). Here, the pear outlook quality is related with existence of stain, blob, scratch, and so on caused by physiological impair or diseases. Conceptually SEM is a combination of exploratory factor analysis and multiple regression. By using SEM, a model is constructed to connect independent and dependent variables. The proposed SEM model relates the measured sensing data and the pear outlook quality determined on the basis of farmer judgement. In particularly, it is found that the inner-bag humidity variable relatively affects the pear outlook quality. Therefore, inner-bag humidity sensing might help the farmers to control the pear outlook quality. These results are supported by a large quantity of inner-bag humidity data measured over the years 2014, 2015, and 2016. The experimental and analytical results in this research contribute to spreading Precision Agriculture technologies among the farmers growing ’Le Lectier’.Keywords: precision agriculture, pre-harvest bagging, sensor fusion, structural equation model
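A rough sketch of the first analysis step described above is given below: a correlation matrix across sensor channels, used to spot pairs where one sensor could substitute for another. The DataFrame is simulated with hypothetical channel names, not the Niigata field data or the full SEM analysis.

```python
# Minimal sketch: correlation analysis among sensor channels to identify
# redundant sensors (simulated records, placeholder channel names).
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 1000  # hourly records
field_temp = 20 + 8 * rng.random(n)
df = pd.DataFrame({
    "field_temp": field_temp,
    "inner_bag_temp": field_temp + rng.normal(1.0, 0.5, n),  # tracks field temperature
    "field_humidity": 60 + 20 * rng.random(n),
    "soil_moisture": 0.3 + 0.05 * rng.normal(size=n),
})

corr = df.corr()
print(corr.round(2))
# Highly correlated pairs (e.g. field_temp vs. inner_bag_temp here) are candidates
# for replacing one sensor with the other to reduce system cost.
```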
549 Enhancement of Shelf Life of Malta Fruit with Active Packaging
Authors: Rishi Richa, N. C. Shahi, J. P. Pandey, S. S. Kautkar
Abstract:
Citrus fruits rank third in area and production after banana and mango in India. Sweet oranges are the second largest citrus fruit cultivated in the country. Andhra Pradesh, Maharashtra, Karnataka, Punjab, Haryana, Rajasthan, and Uttarakhand are the main sweet orange-growing states. Citrus fruits occupy a leading position in the fruit trade of Uttarakhand, accounting for about 14.38% of the total area under fruits and contributing nearly 17.75% to the total fruit production. Malta is grown in most of the hill districts of Uttarakhand. Malta common has high acceptability due to its attractive colour, distinctive flavour, and taste. The excellent quality fruits are generally available for only one or two months. However, due to its short shelf life, Malta cannot be stored for a long time under ambient conditions and cannot be transported to distant places. Continuous loss of water adversely affects the quality of Malta during storage and transportation. The method of picking, packaging, and cold storage has detrimental effects on moisture loss. Climatic conditions such as ambient temperature, relative humidity, wind condition (aeration), and microbial attack greatly influence the rate of moisture loss and quality. Therefore, different agro-climatic zones will have different moisture loss patterns. The rate of moisture loss can be taken as one of the quality parameters in combination with one or more parameters such as RH and aeration. The moisture content of fruits and vegetables determines their freshness. Hence, it is important to maintain the initial moisture status of fruits and vegetables for a prolonged period after harvest. Keeping all these points in view, an effort was made to store Malta under ambient conditions. In this study, response surface methodology and experimental design were applied to optimize the independent variables and enhance the shelf life of Malta stored for four months. A Box-Behnken design with 12 factorial points and 5 replicates at the centre point was used to build a model for predicting and optimizing the storage process parameters. The independent parameters, viz., scavenger (3, 4 and 5 g), polythene thickness (75, 100 and 125 gauge) and fungicide concentration (100, 150 and 200 ppm), were selected and analyzed. A scavenger dose of 5 g, a 125 gauge film and a 200 ppm fungicide solution are the optimized values for storage, which may enhance shelf life up to 4 months.
Keywords: Malta fruit, scavenger, packaging, shelf life
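To make the design and model-fitting step concrete, the sketch below builds the coded 3-factor Box-Behnken design (12 edge points plus centre replicates) and fits a second-order response surface with plain least squares. The response values and the mapping of coded levels to the study's factor levels are placeholders, not the measured shelf-life data.

```python
# Minimal sketch: coded 3-factor Box-Behnken design and a quadratic
# response-surface fit with numpy (placeholder response values).
import numpy as np
from itertools import combinations

levels = {  # coded -1 / 0 / +1 map onto the factor levels reported above
    "scavenger_g": (3, 4, 5),
    "gauge": (75, 100, 125),
    "fungicide_ppm": (100, 150, 200),
}

# Box-Behnken: pairs of factors at +/-1 while the third is held at 0
edge = []
for i, j in combinations(range(3), 2):
    for a in (-1, 1):
        for b in (-1, 1):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            edge.append(row)
X = np.array(edge + [[0, 0, 0]] * 5, dtype=float)  # 12 edge points + 5 centre replicates

y = np.random.default_rng(1).normal(100, 5, size=len(X))  # placeholder response

# Model columns: intercept, linear, two-factor interaction, squared terms
cols = [np.ones(len(X))] + [X[:, k] for k in range(3)]
cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
cols += [X[:, k] ** 2 for k in range(3)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted second-order coefficients:", np.round(coef, 3))
```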
548 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case
Authors: Lukas Reznak, Maria Reznakova
Abstract:
Recession of an economy has a profound negative effect on all involved stakeholders. It follows that timely prediction of recessions has been of utmost interest both in the theoretical research and in practical macroeconomic modelling. Current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is not suitable for two reasons: the standard continuous models are proving to be obsolete and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting research theory and verify the findings on real data of the Czech Republic and Germany. In the paper, the authors present a family of discrete choice probit models with parameters estimated by the method of maximum likelihood. In the basic form, the probits model a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking into consideration previous economic activity to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating correlation of error terms and thus modelling the dependencies of the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance the predictive performance. A vital enabler of timely and successful recession forecasting are reliable and readily available data. Leading indicators, namely the yield curve and the stock market indices, represent an ideal data base, as the pieces of information is available in advance and do not undergo any retroactive revisions. As importantly, the combination of yield curve and stock market indices reflect a range of macroeconomic and financial market investors’ trends which influence the economic cycle. These theoretical approaches are applied on real data of Czech Republic and Germany. Two models for each country were identified – each for in-sample and out-of-sample predictive purposes. All four followed a bivariate structure, while three contained a dynamic component.Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany
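To illustrate the model family described above, the sketch below writes down the log-likelihood of a univariate dynamic probit, where the recession probability depends on lagged leading indicators and the lagged recession state, and maximizes it with scipy. The series are simulated placeholders, not the Czech or German data, and the bivariate extension with correlated errors is omitted.

```python
# Minimal sketch of a univariate dynamic probit:
# P(y_t = 1) = Phi(c + b1*x1_{t-1} + b2*x2_{t-1} + a*y_{t-1}), fitted by ML.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
T = 200
x = rng.normal(size=(T, 2))              # e.g. term spread and stock-index return (simulated)
y = (rng.random(T) < 0.2).astype(float)  # placeholder recession indicator

def neg_loglik(theta):
    c, b1, b2, a = theta
    ll = 0.0
    for t in range(1, T):
        idx = c + b1 * x[t - 1, 0] + b2 * x[t - 1, 1] + a * y[t - 1]
        p = norm.cdf(idx)
        p = min(max(p, 1e-10), 1 - 1e-10)  # guard against log(0)
        ll += y[t] * np.log(p) + (1 - y[t]) * np.log(1 - p)
    return -ll

res = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print("ML estimates (c, b1, b2, a):", np.round(res.x, 3))
```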
547 The Decision-Making Process of the Central Banks of Brazil and India in Regional Integration: A Comparative Analysis of MERCOSUR and SAARC (2003-2014)
Authors: Andre Sanches Siqueira Campos
Abstract:
Central banks can play a significant role in promoting regional economic and monetary integration by strengthening the payment and settlement systems. However, close coordination and cooperation require facilitating the implementation of reforms at domestic and cross-border levels in order to benchmark with international standards and commitments to the liberal order. This situation reflects the normative power of the regulatory globalization dimension of strong states, which may drive or constrain regional integration. In the MERCOSUR and SAARC regions, central banks have set financial initiatives that could facilitate South America and South Asia regions to move towards convergence integration and facilitate trade and investments connectivities. This is qualitative method research based on a combination of the Process-Tracing method with Qualitative Comparative Analysis (QCA). This research approaches multiple forms of data based on central banks, regional organisations, national governments, and financial institutions supported by existing literature. The aim of this research is to analyze the decision-making process of the Central Bank of Brazil (BCB) and the Reserve Bank of India (RBI) towards regional financial cooperation by identifying connectivity instruments that foster, gridlock, or redefine cooperation. The BCB and The RBI manage the monetary policy of the largest economies of those regions, which makes regional cooperation a relevant framework to understand how they provide an effective institutional arrangement for regional organisations to achieve some of their key policies and economic objectives. The preliminary conclusion is that both BCB and RBI demonstrate a reluctance to deepen regional cooperation because of the existing economic, political, and institutional asymmetries. Deepening regional cooperation is constrained by the interests of central banks in protecting their economies from risks of instability due to different degrees of development between countries in their regions and international financial crises that have impacted the international system in the 21st century. Reluctant regional integration also provides autonomy for national development and political ground for the contestation of Global Financial Governance by Brazil and India.Keywords: Brazil, central banks, decision-making process, global financial governance, India, MERCOSUR, connectivity, payment system, regional cooperation, SAARC
546 An Institutional Mapping and Stakeholder Analysis of ASEAN’s Preparedness for Nuclear Power Disaster
Authors: Nur Azha Putra Abdul Azim, Denise Cheong, S. Nivedita
Abstract:
Currently, there are no nuclear power reactors among the Association of Southeast Asian Nations (ASEAN) member states (AMS) but there are seven operational nuclear research reactors, and Indonesia is about to construct the region’s first experimental power reactor by the end of the decade. If successful, the experimental power reactor will lay the foundation for the country’s and region’s first nuclear power plant. Despite projecting confidence during the period of nuclear power renaissance in the region in the last decade, none of the AMS has committed to a political decision on the use of nuclear energy and this is largely due to the Fukushima nuclear power accident in 2011. Of the ten AMS, Vietnam, Indonesia and Malaysia have demonstrated the most progress in developing nuclear energy based on the nuclear power infrastructure development assessments made by the International Atomic Energy Agency. Of these three states, Vietnam came closest to building its first nuclear power plant but decided to delay construction further due to safety and security concerns. Meanwhile, Vietnam along with Indonesia and Malaysia continue with their nuclear power infrastructure development and the remaining SEA states, with the exception of Brunei and Singapore, continue to build their expertise and capacity for nuclear power energy. At the current rate of progress, Indonesia is expected to make a national decision on the use of nuclear power by 2023 while Malaysia, the Philippines, and Thailand have included the use of nuclear power in their mid to long-term power development plans. Vietnam remains open to nuclear power but has not placed a timeline. The medium to short-term power development projection in the region suggests that the use of nuclear energy in the region is a matter of 'when' rather than 'if'. In lieu of the prospects for nuclear energy in Southeast Asia (SEA), this presentation will review the literature on ASEAN radiological emergency and preparedness response (EPR) plans and examine ASEAN’s disaster management and emergency framework. Through a combination of institutional mapping and stakeholder analysis methods, which we examine in the context of the international EPR, and nuclear safety and security regimes, we will identify the issues and challenges in developing a regional radiological EPR framework in the SEA. We will conclude with the observation that ASEAN faces serious structural, institutional and governance challenges due to the AMS inherent political structures and history of interstate conflicts, and propose that ASEAN should either enlarge the existing scope of its disaster management and response framework or that its radiological EPR framework should exist as a separate entity.Keywords: nuclear power, nuclear accident, ASEAN, Southeast Asia
545 Attitudes Towards the Supernatural in Benjamin Britten’s The Turn of the Screw
Authors: Yaou Zhang
Abstract:
Background: Relatively little scholarly attention has been paid to the production history of Benjamin Britten's chamber opera The Turn of the Screw, one of Britten's most remarkable operas. The libretto is based on Henry James's novella of the same name, written in 1898, and one of the primary questions the story poses to its audience is "how real the ghosts are", which leaves the story deeply ambiguous in readers' minds. Aims: This research focuses on the experience of seeing the opera on stage over several decades. This study of opera productions over time not only provides insight into how stage performances can alter audience members' perceptions of the opera in the present but also reveals a landscape of shifting aesthetics and receptions. Methods: To examine the hypotheses on interpretation and reception, qualitative analysis is used to examine the figures of the ghosts in different productions in the UK from 1954 to 2021, by accessing recordings, newspapers, and reviews of the productions sourced from online and physical archives. For instance, field research was conducted by arranging interviews with the creative teams and by visiting Opera North in Leeds and the Britten-Pears Foundation. The collected data reveal the "hidden identity" in creative teams' interpretations, social preferences, and rediscoveries that have previously remained unseen. Results: This research presents an angle on Britten's Screw from a third position; it shows how attention has moved from the question "do the ghosts really exist" to the traumatised children. Discussion: Critics and audiences have debated for decades whether the governess hallucinates the ghosts in the opera. In recent years, directors of new productions have given themselves the opportunity to go deeper into Britten's musical structure and to give the opera more space to be interpreted, rather than debating whether "ghosts actually exist" or "the governess has psychological problems". One can consider that the questionable actions of the children arise because they are suffering from trauma, whether the trauma comes from the ghosts, the hallucinating governess, or some prior experience: the various interpretations lead to one result, that the children are the recipients of trauma. Arguably, the role of the supernatural is neither simply an element of a ghost story nor simply part of the ambiguity between the supernatural and the hallucinations of the governess; rather, the ghosts and the hallucinating governess can exist at the same time - the combination of the supernatural's and the governess's behaviours on stage generates a sharper and more serious angle that draws our attention to the traumatized children.
Keywords: Benjamin Britten, chamber opera, production, reception, staging, The Turn of the Screw
544 Mathematical Modelling of Biogas Dehumidification Using a Counterflow Heat Exchanger
Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans
Abstract:
Dehumidification of biogas at biomass plants is very important to ensure energy-efficient burning of the biomethane at the outlet. A few methods are widely used to reduce the water content in biogas, e.g. chiller/heat exchanger based cooling, the use of adsorption processes such as PSA, or a combination of such approaches. A quite different method of biogas dehumidification is offered and analyzed in this paper. The main idea is to direct the flow of biogas from the plant around it and downwards, thus creating an additional insulation layer. As the temperature in the gas shell layer around the plant decreases from ~38°C to 20°C in the summer, or even to 0°C in the winter, condensation of water vapor occurs. The water at the bottom of the gas shell can be collected and drained away. In addition, another upward shell layer is created after the condensate drainage point on the outer side, further reducing heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research work deals with the numerical modelling of the biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in both channels are conjugated in the case of low thermal resistance between the layers. The MATLAB programming language is used for multiphysical model development, numerical calculations and result visualization. An experimental installation on a biogas plant's vertical wall, with an additional two layers of polycarbonate sheets and a controlled gas flow, was set up to verify the modelling results. The gas flow at the inlet/outlet, the temperatures between the layers and the humidity were controlled and measured during a number of experiments. Good correlation with the modelling results for the vertical wall section allows the developed numerical model to be used for an estimation of parameters for the whole biogas dehumidification system. Numerical modelling of the biogas counterflow heat exchanger system placed on the plant's wall for various cases allows the thickness of the gas layers and of the insulation layer to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling of a defined system configuration under known conditions helps to predict the temperature and humidity content of the biogas at the outlet.
Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model
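For orientation only, the sketch below estimates the cooling achievable in such a counterflow gas-gas arrangement with a simple effectiveness-NTU balance; it is not the authors' MATLAB model, it ignores condensation heat release, and the flow, heat-transfer and area values are assumptions.

```python
# Minimal sketch: effectiveness-NTU estimate for the counterflow shell layers,
# with assumed biogas properties and heat-transfer parameters.
import numpy as np

m_dot = 0.02   # kg/s biogas mass flow (assumed)
cp = 1600.0    # J/(kg K) approximate biogas specific heat (assumed)
U = 3.0        # W/(m2 K) overall heat transfer coefficient between layers (assumed)
A = 40.0       # m2 exchange area of the shell layer (assumed)

C_hot = C_cold = m_dot * cp      # both streams are the same gas, so C_r = 1
NTU = U * A / C_hot
eps = NTU / (1.0 + NTU)          # counterflow effectiveness for C_r = 1

T_hot_in, T_cold_in = 38.0, 0.0  # deg C: plant outlet vs. winter-cooled return flow
q = eps * C_hot * (T_hot_in - T_cold_in)
T_hot_out = T_hot_in - q / C_hot
print(f"effectiveness = {eps:.2f}, hot gas leaves the shell at about {T_hot_out:.1f} C")
```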
543 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework
Authors: Iulia E. Falcan
Abstract:
The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources such as biomass. This paper uses NASA-derived hourly data on the weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators for Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technology portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand, and that ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus expected power output from RE technologies, rather than assuming known future power output. The latter level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint into the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5% and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization
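The sketch below shows how the CVaR of unmet demand can be estimated empirically for one candidate capacity mix, which is the building block behind a CVaR-constrained portfolio optimization. The hourly capacity factors, load profile and capacity mix are simulated placeholders, not the NASA-derived scenarios or the paper's optimization model.

```python
# Minimal sketch: empirical VaR/CVaR of hourly unmet demand for one assumed
# capacity mix, using simulated wind/solar capacity factors and load.
import numpy as np

rng = np.random.default_rng(3)
hours = 8760
wind_cf = rng.beta(2, 4, hours)         # placeholder capacity-factor scenarios
solar_cf = rng.beta(2, 6, hours)
demand = 50 + 10 * rng.random(hours)    # GW, placeholder load profile

cap = {"wind": 80.0, "solar": 60.0, "storage_power": 15.0, "biomass": 10.0}  # GW, assumed mix

supply = (cap["wind"] * wind_cf + cap["solar"] * solar_cf
          + cap["storage_power"] + cap["biomass"])  # crude bound, no storage dynamics
shortfall = np.maximum(demand - supply, 0.0)        # unmet demand per hour

alpha = 0.95                             # 5% risk threshold
sorted_s = np.sort(shortfall)
k = int(np.ceil(alpha * hours))
var = sorted_s[k]                        # approximate 95% Value at Risk
cvar = sorted_s[k:].mean()               # expected shortfall in the worst 5% of hours
print(f"VaR(95%) = {var:.2f} GW, CVaR(95%) = {cvar:.2f} GW of unmet demand")
```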
542 Sequential Padding: A Method to Improve the Impact Resistance in Body Armor Materials
Authors: Ankita Srivastava, Bhupendra S. Butola, Abhijit Majumdar
Abstract:
The application of shear thickening fluid (STF) has been proven to increase the impact resistance performance of textile structures for further use as body armor materials. In the present research, STF was applied to Kevlar woven fabric to make the structure lightweight and flexible while improving its impact resistance performance. It was observed that obtaining a fair amount of add-on of STF on Kevlar fabric is difficult, as Kevlar fabric comes with a pre-coating of PTFE which hinders its absorbency. Hence, a method termed sequential padding was developed in the present study to improve the add-on of STF on Kevlar fabric. Contrary to the conventional process, where Kevlar fabric is treated with STF once using any one pressure, in the sequential padding method the Kevlar fabrics were treated twice in a sequential manner, using a combination of two pressures in a sample. 200 GSM Kevlar fabrics were used in the present study. STF was prepared by adding PEG with 70% (w/w) nano-silica concentration. Ethanol was added to the STF at a fixed ratio to reduce viscosity. A high-speed homogenizer was used to make the dispersion. A total of nine STF-treated Kevlar fabric samples were prepared by using varying combinations and sequences of three levels of padding pressure (0.5, 1.0 and 2.0 bar). The fabrics were dried at 80°C for 40 minutes in a hot air oven to evaporate the ethanol. Untreated and STF-treated fabrics were tested for add-on%. The impact resistance performance of the samples was also tested on a dynamic impact tester at a fixed velocity of 6 m/s. Further, to observe the impact resistance performance under actual conditions, a low-velocity ballistic test at 165 m/s was also performed to confirm the results of the impact resistance test. It was observed that both the add-on% and the impact energy absorption of the Kevlar fabrics increase significantly with the sequential padding process as compared to the untreated fabric as well as the single-stage padding process. It was also determined that impact energy absorption is significantly better in STF-treated Kevlar fabrics when the first padding pressure is higher and the second padding pressure is lower. The sequentially padded Kevlar fabric shows an almost 125% increase in ballistic impact energy absorption (40.62 J) as compared to the untreated fabric (18.07 J). The results reflect the fact that treatment of the fabrics at a higher pressure during the first padding is responsible for a uniform distribution of STF within the fabric structure, while padding with a second, lower pressure ensures a high add-on of STF for an overall improvement in the impact resistance performance of the fabric. Therefore, it is concluded that the sequential padding process may help to improve the impact performance of body armor materials based on STF-treated Kevlar fabrics.
Keywords: body armor, impact resistance, Kevlar, shear thickening fluid
541 Anaerobic Digestion of Green Wastes at Different Solids Concentrations and Temperatures to Enhance Methane Generation
Authors: A. Bayat, R. Bello-Mendoza, D. G. Wareham
Abstract:
Two major categories of green waste are fruit and vegetable (FV) waste and garden and yard (GY) waste. Although, anaerobic digestions (AD) is able to manage FV waste; there is less confidence in the conditions for AD to handle GY wastes (grass, leaves, trees and bush trimmings); mainly because GY contains lignin and other recalcitrant organics. GY in the dry state (TS ≥ 15 %) can be digested at mesophilic temperatures; however, little methane data has been reported under thermophilic conditions, where conceivably better methane yields could be achieved. In addition, it is suspected that at lower solids concentrations, the methane yield could be increased. As such, the aim of this research is to find the temperature and solids concentration conditions that produce the most methane; under two different temperature regimes (mesophilic, thermophilic) and three solids states (i.e. 'dry', 'semi-dry' and 'wet'). Twenty liters of GY waste was collected from a public park located in the northern district in Tehran. The clippings consisted of freshly cut grass as well as dry branches and leaves. The GY waste was chopped before being fed into a mechanical blender that reduced it to a paste-like consistency. An initial TS concentration of approximately 38 % was achieved. Four hundred mL of anaerobic inoculum (average total solids (TS) concentration of 2.03 ± 0.131 % of which 73.4% were volatile solid (VS), soluble chemical oxygen demand (sCOD) of 4.59 ± 0.3 g/L) was mixed with the GY waste substrate paste (along with distilled water) to achieve a TS content of approximately 20 %. For comparative purposes, approximately 20 liters of FV waste was ground in the same manner as the GY waste. Since FV waste has a much higher natural water content than GY, it was dewatered to obtain a starting TS concentration in the dry solid-state range (TS ≥ 15 %). Three samples were dewatered to an average starting TS concentration of 32.71 %. The inoculum was added (along with distilled water) to dilute the initial FV TS concentrations down to semi-dry conditions (10-15 %) and wet conditions (below 10 %). Twelve 1-L batch bioreactors were loaded simultaneously with either GY or FV waste at TS solid concentrations ranging from 3.85 ± 1.22 % to 20.11 ± 1.23 %. The reactors were sealed and were operated for 30 days while being immersed in water baths to maintain a constant temperature of 37 ± 0.5 °C (mesophilic) or 55 ± 0.5 °C (thermophilic). A maximum methane yield of 115.42 (L methane/ kg VS added) was obtained for the GY thermophilic-wet AD combination. Methane yield was enhanced by 240 % compared to the GY waste mesophilic-dry condition. The results confirm that high temperature regimes and small solids concentrations are conditions that enhance methane yield from GY waste. A similar trend was observed for the anaerobic digestion of FV waste. Furthermore, a maximum value of VS (53 %) and sCOD (84 %) reduction was achieved during the AD of GY waste under the thermophilic-wet condition.Keywords: anaerobic digestion, thermophilic, mesophilic, total solids concentration
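The dilution step described above (bringing the waste/inoculum mixture from its starting total-solids content down to the dry, semi-dry or wet range) is a simple solids mass balance; a minimal sketch is given below with illustrative numbers rather than the study's exact recipe.

```python
# Minimal sketch: distilled water needed to dilute a waste/inoculum mixture to a
# target total-solids (TS) fraction, assuming the solids mass is conserved.
def water_to_add(mass_kg: float, ts_current: float, ts_target: float) -> float:
    """Water (kg) to dilute `mass_kg` of material from ts_current to ts_target
    (both given as fractions, e.g. 0.38 for 38 % TS)."""
    if ts_target >= ts_current:
        raise ValueError("target TS must be below current TS")
    solids = mass_kg * ts_current
    total_final = solids / ts_target
    return total_final - mass_kg

# e.g. dilute 0.8 kg of GY waste paste at 38 % TS down to a 'wet' condition of 8 % TS
print(f"{water_to_add(0.8, 0.38, 0.08):.2f} kg of water")  # -> 3.00 kg
```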
540 Virtual Reality in COVID-19 Stroke Rehabilitation: Preliminary Outcomes
Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini
Abstract:
Background: There is growing evidence that Cerebral Vascular Accident (CVA) can be a consequence of Covid-19 infection. Understanding novel treatment approaches are important in optimizing patient outcomes. Case: This case explores the use of Virtual Reality (VR) in the treatment of a 23-year-old COVID-positive female presenting with left hemiparesis in August 2020. Imaging showed right globus pallidus, thalamus, and internal capsule ischemic stroke. Conventional rehabilitation was started two weeks later, with virtual reality (VR) included. This game-based virtual reality (VR) technology developed for stroke patients was based on upper extremity exercises and functions for stroke. Physical examination showed left hemiparesis with muscle strength 3/5 in the upper extremity and 4/5 in the lower extremity. The range of motion of the shoulder was 90-100 degrees. The speech exam showed a mild decrease in fluency. Mild lower lip dynamic asymmetry was seen. Babinski was positive on the left. Gait speed was decreased (75 steps per minute). Intervention: Our game-based VR system was developed based on upper extremity physiotherapy exercises for post-stroke patients to increase the active, voluntary movement of the upper extremity joints and improve the function. The conventional program was initiated with active exercises, shoulder sanding for joint ROMs, walking shoulder, shoulder wheel, and combination movements of the shoulder, elbow, and wrist joints, alternative flexion-extension, pronation-supination movements, Pegboard and Purdo pegboard exercises. Also, fine movements included smart gloves, biofeedback, finger ladder, and writing. The difficulty of the game increased at each stage of the practice with progress in patient performances. Outcome: After 6 weeks of treatment, gait and speech were normal and upper extremity strength was improved to near normal status. No adverse effects were noted. Conclusion: This case suggests that VR is a useful tool in the treatment of a patient with covid-19 related CVA. The safety of newly developed instruments for such cases provides new approaches to improve the therapeutic outcomes and prognosis as well as increased satisfaction rate among patients.Keywords: covid-19, stroke, virtual reality, rehabilitation
539 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies, and predictive model types or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where protein value has been determined by the FOSS Infratec NOVA which is the golden industry standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. In the first dataset, protein regression analysis is the problem to solve while variety classification analysis is the problem to solve in the second dataset. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data reducing the need for advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
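Two of the spectral preprocessing steps named above, standard normal variate (SNV) and Savitzky-Golay (SG) filtering, are shown in the sketch below on a simulated batch of NIR spectra; the data, window length and polynomial order are placeholders, not the settings used on the wheat and rye images.

```python
# Minimal sketch: SNV followed by Savitzky-Golay smoothing / first derivative
# on simulated NIR spectra (32 pixels x 224 bands, placeholder values).
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(4)
spectra = rng.normal(size=(32, 224)) + np.linspace(0, 1, 224)

def snv(x: np.ndarray) -> np.ndarray:
    """Centre and scale each spectrum by its own mean and standard deviation."""
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

smoothed = savgol_filter(snv(spectra), window_length=11, polyorder=2, axis=1)
first_derivative = savgol_filter(snv(spectra), window_length=11, polyorder=2,
                                 deriv=1, axis=1)
print(smoothed.shape, first_derivative.shape)  # (32, 224) each
```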
538 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools
Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus
Abstract:
Additive manufacturing has emerged as a fast-growing segment of manufacturing technology. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding. By this, machine tools can achieve a higher degree of flexibility and a shorter production time. Still, there are challenges that have to be accounted for in terms of maintaining the necessary machining accuracy, especially due to thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, individually and assembled. This information will help to design a geometrically stable machine tool under the influence of high-power laser processes. This paper presents an approach to decrease the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered, thus enabling an optimized design of the machine tool, and of its components, in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g. laser power, angle of the laser beam, reflection coefficients and heat transfer coefficient. Hence, a systematic approach to obtain this matched FEM model is essential. The two constituent aspects of the method are describing the thermal behavior of the structural components and predicting the laser beam path in order to determine the relevant beam intensity on the structural components. To match the model, both aspects of the method have to be combined and verified empirically. In this context, an essential machine component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process. Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulative approach to the described types of experimental examination is presented. Concluding, it is shown that the method, and a good understanding of its two core aspects, the thermo-elastic machine behavior and the laser beam path, as well as their combination, helps designers to minimize the loss of precision in the early stages of the design phase.
Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects
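As a greatly simplified stand-in for the kind of thermal simulation described above, the sketch below solves 1D transient heat conduction through a steel section with an assumed absorbed laser heat flux on one face and convection on the other, using an explicit finite-difference scheme. All material, flux and boundary values are assumptions; the paper's matched FEM model is far more detailed.

```python
# Minimal sketch: explicit finite-difference solution of 1D transient conduction
# with a prescribed laser heat flux on one face and convection on the other.
import numpy as np

L, n = 0.05, 51                    # 50 mm thick section, 51 nodes
dx = L / (n - 1)
k, rho, cp = 45.0, 7800.0, 490.0   # approximate steel properties (assumed)
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha           # stable explicit time step
q_laser = 2.0e4                    # W/m2 absorbed flux = power * absorptivity / spot area (assumed)
h, T_amb = 10.0, 20.0              # convection coefficient and ambient temperature (assumed)

T = np.full(n, T_amb)
for _ in range(int(60.0 / dt)):    # simulate 60 s of heating
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q_laser * dx / k                            # prescribed-flux face
    Tn[-1] = (Tn[-2] + h * dx / k * T_amb) / (1 + h * dx / k)   # convective face
    T = Tn
print(f"irradiated-face temperature after 60 s: {T[0]:.1f} deg C")
```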
537 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms
Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic
Abstract:
Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general they are often lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption, which accounts for a large volume of the gas reserves. Gas production from tight shale deposits is made possible by extensive and deep well fracturing, which contacts large fractions of the formation. Conventional reservoir modelling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved to be very pessimistic and inaccurate. This paper presents a new approach to forecasting shale gas production by detailed modeling of gas desorption, diffusion and non-linear flow mechanisms in combination with a statistical representation of these processes. The representation of the model involves a cube as a porous medium where free gas is present and a sphere (SiC: Sphere in Cube model) inside it where gas is adsorbed onto the kerogen or organic matter. Further, the sphere is considered to consist of many layers of adsorbed gas in an onion-like structure. With pressure decline, the gas desorbs first from the outermost layer of the sphere, causing a decrease in its molecular concentration. The newly available surface area and the change in concentration trigger the diffusion of gas from the kerogen. The process continues until all the gas present internally diffuses out of the kerogen, adsorbs onto the available surface area, and then desorbs into the nanopores and micro-fractures in the cube. Each SiC idealizes a gas pathway and is characterized by the sphere diameter and the length of the cube. The diameter allows gas storage, diffusion and desorption to be modelled; the cube length takes into account the pathway for flow in nanopores and micro-fractures. Many of these representative but general cells of the reservoir are put together and linked to a well or hydraulic fracture. The paper quantitatively describes these processes as well as clarifying the geological conditions under which successful shale gas production could be expected. A numerical model has been derived and implemented in FORTRAN to develop a simulator for the production of shale gas, considering the spheres as a source term in each of the grid blocks. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly assess gas production rates from shale formations. We also examine the effect of model input properties on gas production.
Keywords: adsorption, diffusion, non-linear flow, shale gas production
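The desorption source term at the heart of such models is commonly described by a Langmuir isotherm; the sketch below estimates how much adsorbed gas is released from the kerogen "sphere" as reservoir pressure declines. The Langmuir volume, Langmuir pressure and rock density are illustrative assumptions, not parameters from the paper or any field case.

```python
# Minimal sketch: Langmuir-isotherm estimate of gas released by desorption as
# pressure falls from initial to abandonment conditions (assumed parameters).
V_L = 3.0e-3     # Langmuir volume, m3 of gas per kg of kerogen-rich matrix (assumed)
p_L = 4.0e6      # Langmuir pressure, Pa (assumed)
rho_bulk = 2500  # kg/m3 bulk density of the organic-rich matrix (assumed)

def adsorbed_content(p: float) -> float:
    """Adsorbed gas per unit bulk rock volume (m3 gas / m3 rock) at pressure p."""
    return rho_bulk * V_L * p / (p_L + p)

p_initial, p_abandon = 25e6, 5e6   # Pa
released = adsorbed_content(p_initial) - adsorbed_content(p_abandon)
print(f"gas released by desorption: {released:.2f} m3 per m3 of rock")
```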
536 Flexural Properties of Typha Fibers Reinforced Polyester Composite
Authors: Sana Rezig, Yosr Ben Mlik, Mounir Jaouadi, Foued Khoffi, Slah Msahli, Bernard Durand
Abstract:
With increasing interest in environmental concerns, natural fibers are once again being considered as reinforcements for polymer composites. The main objective of this study is to explore another natural resource, Typha fiber, which is renewable, has no production cost and is abundantly available in nature. The aim of this study was to investigate the flexural properties of a polyester composite with and without reinforcing Typha leaf and stem fibers. The specimens were made by the hand lay-up process using a polyester matrix. In our work, we focused on the effect of various treatment conditions (sea water, alkali treatment, and a combination of the two treatments), as surface modifiers, on the flexural properties of the Typha fiber reinforced polyester composites. Moreover, the weight ratio of Typha leaf or stem fibers was investigated. Besides, fibers from both the leaf and the stem of the Typha plant were used to evaluate the reinforcing effect. Another parameter, the reinforcement structure, was also investigated. In fact, a first composite was made with an air-laid nonwoven structure of fibers, and a second composite with a mixture of fibers and resin, for each kind of treatment. The results show that the alkali treatment and the combined process provided better mechanical properties of the composites in comparison with fibers treated with sea water. The fiber weight ratio influenced the flexural properties of the composites. Indeed, maximum flexural strengths of 69.8 and 62.32 MPa, with flexural moduli of 6.16 and 6.34 GPa, were observed for composites reinforced with leaf and stem fibers, respectively, at a 12.6% fiber weight ratio. Among the different treatments carried out, the treatment using caustic soda, whether alone or after retting in sea water, shows the best results because it improves adhesion between the polyester matrix and the reinforcing fibers. SEM photographs were taken to ascertain the effects of the surface treatment of the fibers. Varying the structure of the Typha fibers, the reinforcement used in bulk shows more effective results than that used in the nonwoven structure. In addition, flexural strength rises by about 65.32% in the case of the composite reinforced with a mixture of 12.6% leaf fibers, and by 27.45% in the case of a composite reinforced with a nonwoven structure of 12.6% leaf fibers. Thus, to better evaluate the effect of the fiber origin, the reinforcement structure, the processing performed and the reinforcement factor on the performance of the composite materials, a statistical study was performed using Minitab. ANOVA was used, and the patterns of the main effects of these parameters and the interactions between them were established. From the statistical analysis, the fiber treatment and the reinforcement structure seem to be the most significant parameters.
Keywords: flexural properties, fiber treatment, structure and weight ratio, SEM photographs, Typha leaf and stem fibers
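The factorial ANOVA described above was run in Minitab; a minimal equivalent sketch with statsmodels is shown below on simulated flexural-strength data, with placeholder factor names, effect sizes and replicate counts rather than the study's measurements.

```python
# Minimal sketch: two-factor ANOVA (treatment x reinforcement structure) with
# interaction, on simulated flexural-strength data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)
treatments = ["sea_water", "alkali", "combined"]
structures = ["nonwoven", "bulk"]
rows = []
for t in treatments:
    for s in structures:
        for _ in range(5):  # 5 replicate specimens per cell (placeholder)
            strength = 40 + 10 * (t != "sea_water") + 8 * (s == "bulk") + rng.normal(0, 3)
            rows.append({"treatment": t, "structure": s, "strength": strength})
df = pd.DataFrame(rows)

model = ols("strength ~ C(treatment) * C(structure)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction table
```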
535 Time-Interval between Rectal Cancer Surgery and Reintervention for Anastomotic Leakage and the Effects of a Defunctioning Stoma: A Dutch Population-Based Study
Authors: Anne-Loes K. Warps, Rob A. E. M. Tollenaar, Pieter J. Tanis, Jan Willem T. Dekker
Abstract:
Anastomotic leakage after colorectal cancer surgery remains a severe complication. Early diagnosis and treatment are essential to prevent further adverse outcomes. In the literature, it has been suggested that earlier reintervention is associated with better survival, but anastomotic leakage can occur with a highly variable time interval to index surgery. This study aims to evaluate the time-interval between rectal cancer resection with primary anastomosis creation and reoperation, in relation to short-term outcomes, stratified for the use of a defunctioning stoma. Methods: Data of all primary rectal cancer patients that underwent elective resection with primary anastomosis during 2013-2019 were extracted from the Dutch ColoRectal Audit. Analyses were stratified for defunctioning stoma. Anastomotic leakage was defined as a defect of the intestinal wall or abscess at the site of the colorectal anastomosis for which a reintervention was required within 30 days. Primary outcomes were new stoma construction, mortality, ICU admission, prolonged hospital stay and readmission. The association between time to reoperation and outcome was evaluated in three ways: Per 2 days, before versus on or after postoperative day 5 and during primary versus readmission. Results: In total 10,772 rectal cancer patients underwent resection with primary anastomosis. A defunctioning stoma was made in 46.6% of patients. These patients had a lower anastomotic leakage rate (8.2% vs. 11.6%, p < 0.001) and less often underwent a reoperation (45.3% vs. 88.7%, p < 0.001). Early reoperations (< 5 days) had the highest complication and mortality rate. Thereafter the distribution of adverse outcomes was more spread over the 30-day postoperative period for patients with a defunctioning stoma. Median time-interval from primary resection to reoperation for defunctioning stoma patients was 7 days (IQR 4-14) versus 5 days (IQR 3-13 days) for no-defunctioning stoma patients. The mortality rate after primary resection and reoperation were comparable (resp. for defunctioning vs. no-defunctioning stoma 1.0% vs. 0.7%, P=0.106 and 5.0% vs. 2.3%, P=0.107). Conclusion: This study demonstrated that early reinterventions after anastomotic leakage are associated with worse outcomes (i.e. mortality). Maybe the combination of a physiological dip in the cellular immune response and release of cytokines following surgery, as well as a release of endotoxins caused by the bacteremia originating from the leakage, leads to a more profound sepsis. Another explanation might be that early leaks are not contained to the pelvis, leading to a more profound sepsis requiring early reoperations. Leakage with or without defunctioning stoma resulted in a different type of reinterventions and time-interval between surgery and reoperation.Keywords: rectal cancer surgery, defunctioning stoma, anastomotic leakage, time-interval to reoperation
534 Detection of Acrylamide Using Liquid Chromatography-Tandem Mass Spectrometry and Quantitative Risk Assessment in Selected Food from Saudi Market
Authors: Sarah A. Alotaibi, Mohammed A. Almutairi, Abdullah A. Alsayari, Adibah M. Almutairi, Somaiah K. Almubayedh
Abstract:
Concerns over the presence of acrylamide in food date back to 2002, when Swedish scientists stated that, in carbohydrate-rich foods, amounts of acrylamide were formed when cooked at high temperatures. Similar findings were reported by other researchers which, consequently, caused major international efforts to investigate dietary exposure and the subsequent health complications in order to properly manage this issue. Due to this issue, in this work, we aim to determine the acrylamide level in different foods (coffee, potato chips, biscuits, and baby food) commonly consumed by the Saudi population. In a total of forty-three samples, acrylamide was detected in twenty-three samples at levels of 12.3 to 2850 µg/kg. In reference to the food groups, the highest concentration of acrylamide was found in coffee samples (<12.3-2850 μg/kg), followed by potato chips (655-1310 μg/kg), then biscuits (23.5-449 μg/kg), whereas the lowest acrylamide level was observed in baby food (<14.75 – 126 μg/kg). Most coffee, biscuits and potato chips products contain high amount of acrylamide content and also the most commonly consumed product. Saudi adults had a mean exposure of acrylamide for coffee, potato, biscuit, and cereal (0.07439, 0.04794, 0.01125, 0.003371 µg/kg-b.w/day), respectively. On the other hand, exposure to acrylamide in Saudi infants and children to the same types of food was (0.1701, 0.1096, 0.02572, 0.00771 µg/kg-b.w/day), respectively. Most groups have a percentile that exceeds the tolerable daily intake (TDI) cancer value (2.6 µg/kg-b.w/day). Overall, the MOE results show that the Saudi population is at high risk of acrylamide-related disease in all food types, and there is a chance of cancer risk in all age groups (all values ˂10,000). Furthermore, it was found that in non-cancer risks, the acrylamide in all tested foods was within the safe limit (˃125), except for potato chips, in which there is a risk for diseases in the population. With potato and coffee as raw materials, additional studies were conducted to assess different factors, including temperature, cocking time, and additives affecting the acrylamide formation in fried potato and roasted coffee, by systematically varying processing temperatures and time values, a mitigation of acrylamide content was achieved when lowering the temperature and decreasing the cooking time. Furthermore, it was shown that the combination of the addition of chitosan and NaCl had a large impact on the formation.Keywords: risk assessment, dietary exposure, MOA, acrylamide, hazard
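The margin-of-exposure (MOE) logic behind the risk statements above is sketched below for one of the reported mean exposures. The BMDL10 reference points used here (0.17 mg/kg bw/day for neoplastic effects, 0.43 mg/kg bw/day for neurotoxicity) are the values commonly cited from EFSA's 2015 acrylamide opinion and are assumptions for illustration; the paper's own MOEs may be based on different reference points or on higher-percentile exposures.

```python
# Minimal sketch: margin of exposure = reference point / estimated exposure,
# compared against the conventional 10,000 threshold for genotoxic carcinogens.
BMDL10_NEOPLASTIC = 170.0   # ug/kg bw/day (assumed reference point)
BMDL10_NEUROTOXIC = 430.0   # ug/kg bw/day (assumed reference point)

exposure = 0.07439          # ug/kg bw/day, mean adult exposure from coffee reported above

moe_cancer = BMDL10_NEOPLASTIC / exposure
moe_neuro = BMDL10_NEUROTOXIC / exposure
flag = "potential concern (below 10,000)" if moe_cancer < 10_000 else "low concern"
print(f"MOE (neoplastic effects) = {moe_cancer:.0f} -> {flag}")
print(f"MOE (neurotoxicity)      = {moe_neuro:.0f}")
```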
Procedia PDF Downloads 58
533 Kinematic Modelling and Task-Based Synthesis of a Passive Architecture for an Upper Limb Rehabilitation Exoskeleton
Authors: Sakshi Gupta, Anupam Agrawal, Ekta Singla
Abstract:
An exoskeleton designed for rehabilitation purposes encounters many challenges, including ergonomically acceptable wearing technology, compatibility of the architectural design with human motion, actuation type, and human-robot interaction. In this paper, a passive architecture for an upper limb exoskeleton is proposed for assisting in rehabilitation tasks. Kinematic modelling is detailed for task-based kinematic synthesis of the wearable exoskeleton for self-feeding tasks. The exoskeleton architecture possesses expansion and torsion springs that are able to store and redistribute energy over the human arm joints, and the elastic characteristics of the springs have been optimized to minimize the mechanical work of the human arm joints. A hybrid combination of a 4-bar parallelogram linkage and a serial linkage was chosen, in which the 4-bar parallelogram linkage with an expansion spring acts as a rigid structure providing the rotational degree of freedom (DOF) required for lowering and raising the arm, while the serial linkage with a torsion spring provides the rotational DOF required for elbow movement. The focus of the paper is the kinematic modelling, analysis and task-based synthesis framework for the proposed architecture, keeping in consideration the essential tasks of self-feeding and self-exercising during the rehabilitation of a partially healthy person. Primary functional movements (activities of daily living, ADL) are the routine activities that people attend to every day, such as cleaning, dressing and feeding; we focus on the feeding process in order to make people independent with respect to feeding tasks. The targeted users are post-surgery patients under rehabilitation with less than 40% weakness. The main challenge addressed in this work is emulating the natural movement of the human arm. Human motion data are extracted through motion sensors for the targeted feeding tasks and specific exercises, and the task-based synthesis procedure for the proposed architecture is discussed. The results include the simulation of the architectural concept for tracking human-arm movements, together with the kinematic and static study parameters for a standard human weight. D-H parameters are used for kinematic modelling of the hybrid mechanism, and the model is used in task-based optimal synthesis utilizing an evolutionary algorithm. Keywords: passive mechanism, task-based synthesis, emulating human-motion, exoskeleton
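Since the kinematic model above is assembled from D-H parameters, a short forward-kinematics sketch is added here; the two-row D-H table is a placeholder, not the exoskeleton's actual parameter set.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint using the classic D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms into the base-to-end-effector transform."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Placeholder 2-DOF planar arm (arm raising + elbow flexion); lengths in metres.
dh_table = [
    (np.deg2rad(30.0), 0.0, 0.30, 0.0),   # upper-arm link
    (np.deg2rad(45.0), 0.0, 0.25, 0.0),   # forearm link
]
print("end-effector position:", forward_kinematics(dh_table)[:3, 3])
```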
Procedia PDF Downloads 137
532 Targeting Methionine Metabolism in Gastric Cancer: Promising to Improve Chemosensitivity with Non-Heterogeneity
Authors: Nigatu Tadesse, Li Juan, Liuhong Ming
Abstract:
Gastric cancer (GC) is the fifth most common and fourth deadliest cancer in the world, with limited treatment options at the late advanced stage, at which surgical therapy is not recommended and chemotherapy remains the mainstay of treatment. However, the occurrence of chemoresistance, as well as intra-tumoral and inter-tumoral heterogeneity of response to targeted therapy and immunotherapy, underlines a clear unmet treatment need in gastroenterology. Several molecular and cellular alterations have been ascribed to chemoresistance in GC, including cancer stem cells (CSC) and tumor microenvironment (TME) remodeling. Cancer cells, including CSC, bear a higher metabolic demand, and major changes in the TME involve alterations of the gut microbiota interacting with nutrient metabolism. Metabolic upregulation of lipid, carbohydrate, amino acid and fatty acid biosynthesis pathways has been identified as a common hallmark of GC. Metabolic addiction to methionine occurs in many cancer cells to promote the biosynthesis of S-adenosylmethionine (SAM), the universal methyl donor molecule that sustains the high rate of transmethylation in GC and promotes cell proliferation. Targeting methionine metabolism has been found to promote chemosensitivity with non-heterogeneity of treatment response. Methionine restriction (MR) promoted cell cycle arrest at the S/G2 phase and enhanced the downregulation of GC cell resistance to apoptosis (including ferroptosis), which suggests the potential for synergy with chemotherapies acting at the S-phase of the cell cycle as well as with those inducing cell apoptosis. Accumulating evidence shows that both the biogenesis and the intracellular metabolism of exogenous methionine could be safe and effective targets for therapy, either alone or in combination with chemotherapies. This review article provides an overview of the upregulation of the methionine biosynthesis pathway and of molecular signaling through the PI3K/Akt/mTOR-c-MYC axis, which promotes metabolic reprogramming by activating expression of the L-type amino acid transporter 1 (LAT1) and overexpression of methionine adenosyltransferase 2A (MAT2A) for intracellular metabolic conversion of exogenous methionine to SAM in GC. It also discusses the potential of targeting these nodes with novel therapeutic agents, such as methioninase (METase) and inhibitors of MAT2A, c-MYC and methyltransferase-like 16 (METTL16), that are currently at clinical trial development stages, along with future perspectives. Keywords: gastric cancer, methionine metabolism, pi3k/akt/mtorc1-c-myc axis, gut microbiota, MAT2A, c-MYC, METTL16, methioninase
Procedia PDF Downloads 48
531 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method
Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry
Abstract:
The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and thus it plays an important role in numerous specifications such as durability, comfort, and crash. During the development of new vehicle projects at Renault, durability validation is always the main focus, while deployment of comfort comes later in the project; therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Robustness is also an important point of concern, as it is related to manufacturing costs as well as to performance after the ageing of components such as shock absorbers. In this paper an approach is proposed that aims to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series has been applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces that statistically represent a black-box system. Secondly, within several iterations an optimum set is proposed and validated, which forms a Pareto front; at the same time the robustness of each response, which serves as an additional objective, is calculated from the predefined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameter tolerance combination with a maximally acceptable degradation of the responses, in terms of manufacturing costs. A quarter-car model has been tested as an example, applying road excitations from actual road measurements for both endurance and comfort calculations. One indicator, based on Basquin's law, is defined to compare the global chassis durability of different parameter settings; another indicator, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and the reference tests prove a good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational cost for a complex system. Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design
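A stripped-down, single-parameter illustration of a Chebyshev surrogate for an uncertain-but-bounded input is given below; it omits the adaptive-sparse machinery and uses a toy analytic response in place of the quarter-car simulation, so all functions and bounds here are assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def response(x):
    """Toy stand-in for a simulated durability or comfort indicator."""
    return 1.0 / (1.0 + 0.5 * (x - 1.2) ** 2) + 0.05 * np.sin(8.0 * x)

a, b = 0.8, 1.6   # assumed bounded interval for one uncertain parameter
deg = 12          # truncation degree of the Chebyshev series

# Design of experiments at Chebyshev points of the first kind, mapped to [a, b]
t = cheb.chebpts1(deg + 1)                # reference nodes in [-1, 1]
x_nodes = 0.5 * (b - a) * (t + 1.0) + a
coeffs = cheb.chebfit(t, response(x_nodes), deg)

# Evaluating the surrogate densely over the interval yields the response's
# uncertainty interval and a simple robustness indicator (its half-width).
tt = np.linspace(-1.0, 1.0, 2001)
yy = cheb.chebval(tt, coeffs)
print(f"response interval on [{a}, {b}]: [{yy.min():.4f}, {yy.max():.4f}]")
print(f"robustness indicator (half-width): {0.5 * (yy.max() - yy.min()):.4f}")
```

In the multi-parameter case the same idea extends to tensorized or sparse Chebyshev bases, with the parameter bounds supplied by the optimizer at each Pareto iteration.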
Procedia PDF Downloads 152
530 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets
Authors: Ece Cigdem Mutlu, Burak Alakent
Abstract:
Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and to control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of the process dispersion and location parameters, respectively, under the assumption of independent and normally distributed data sets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of the traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; for example, occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, estimators that are robust against the contaminations that may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function for estimating the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and the M-estimator of location with Huber and logistic psi-functions for estimating the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that the robust estimators yield parameter estimates with higher efficiency against all types of contamination, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators for subgroups and individual observations are found to improve the performance of Xbar charts. Keywords: average run length, M-estimators, quality control, robust estimators
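A simplified sketch of a robust Xbar chart is shown below; it uses the Hodges-Lehmann estimator for location and the normal-consistent MAD for scale as stand-ins for the fuller set of estimators (Qn, Harrell-Davis, M-estimators with Huber or logistic psi-functions) studied in the paper, and the Phase I/II data are simulated.

```python
import numpy as np
from itertools import combinations
from scipy.stats import median_abs_deviation

rng = np.random.default_rng(0)

def hodges_lehmann(x):
    """Median of the pairwise Walsh averages: a robust location estimator."""
    walsh = [(xi + xj) / 2.0 for xi, xj in combinations(x, 2)]
    return float(np.median(np.concatenate([np.asarray(x), walsh])))

# ---- Phase I: m rational subgroups of size n, with two injected outliers ----
m, n = 30, 5
phase1 = rng.normal(10.0, 1.0, size=(m, n))
phase1[3, 0] += 8.0
phase1[17, 2] -= 7.0

# Robust location and scale estimates (normal-consistent MAD per subgroup)
mu_hat = hodges_lehmann(phase1.ravel())
sigma_hat = np.mean([1.4826 * median_abs_deviation(g) for g in phase1])

ucl = mu_hat + 3.0 * sigma_hat / np.sqrt(n)
lcl = mu_hat - 3.0 * sigma_hat / np.sqrt(n)

# Conventional limits for comparison (grand mean and overall sample std dev)
xbar, s = phase1.mean(), phase1.std(ddof=1)
print(f"robust limits      : ({lcl:.3f}, {ucl:.3f})")
print(f"conventional limits: ({xbar - 3 * s / np.sqrt(n):.3f}, "
      f"{xbar + 3 * s / np.sqrt(n):.3f})")

# ---- Phase II: flag subgroups whose mean falls outside the robust limits ----
phase2 = rng.normal(10.8, 1.0, size=(20, n))   # process with a shifted mean
means = phase2.mean(axis=1)
print("out-of-control subgroups:", np.where((means > ucl) | (means < lcl))[0])
```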
Procedia PDF Downloads 190
529 The Effects of Lighting Environments on the Perception and Psychology of Consumers of Different Genders in a 3C Retail Store
Authors: Yu-Fong Lin
Abstract:
The main purpose of this study is to explore the impact of different lighting arrangements, which create different visual environments in a 3C retail store, on the perception, psychology, and shopping tendencies of consumers of different genders. In recent years, the ‘emotional shopping’ model has been widely accepted in the consumer market; in addition to the emotional meaning and value of a product, the in-store ‘shopping atmosphere’ has also been increasingly regarded as significant. Lighting serves as an important environmental stimulus that influences the atmosphere of a store: altering the lighting can change the color, the shape, and the atmosphere of a space. A successful retail lighting design can not only attract consumers’ attention and generate their interest in various goods, but also affect consumers’ shopping approach, behavior, and desires. 3C electronic products have become mainstream in the current consumer market, and consumers of different genders may demonstrate different behaviors and preferences within a 3C store environment. This study tests the impact of combinations of lighting contrast and color temperature in a 3C retail store on the visual perception and psychological reactions of consumers of different genders. The research design employs an experimental method to collect data from subjects and then uses statistical analysis following a 2 x 2 x 2 factorial design to identify the influences of the different lighting environments. This study utilizes virtual reality technology as the primary method to create four virtual store lighting environments. The four lighting conditions are as follows: high contrast/cool tone, high contrast/warm tone, low contrast/cool tone, and low contrast/warm tone. Differences in the virtual lighting and environment are used to test subjects’ visual perceptions, emotional reactions, store satisfaction, approach-avoidance intentions, and spatial atmosphere preferences. The findings of our preliminary test indicate that female subjects show a higher pleasure response than male subjects in a 3C retail store. Based on these preliminary findings, the researchers modified the contents of the questionnaires and the virtual 3C retail environments with different lighting conditions in order to conduct the final experiment. The results will provide information about the effects of retail lighting on the environmental psychology and psychological reactions of consumers of different genders in a 3C retail store, and will enable useful practical guidelines on creating 3C retail store lighting and atmosphere to be established for retailers and interior designers. Keywords: 3C retail store, environmental stimuli, lighting, virtual reality
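A hedged sketch of how the 2 x 2 x 2 factorial analysis could be specified is given below; the data frame, column names and effect sizes are hypothetical and only illustrate the model specification, not the study's actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Hypothetical pleasure ratings for the four lighting conditions x two genders
n_per_cell = 15
rows = []
for contrast in ("high", "low"):            # lighting contrast
    for cct in ("cool", "warm"):            # colour temperature
        for gender in ("female", "male"):   # participant gender
            base = 4.0 + 0.4 * (gender == "female") + 0.3 * (cct == "warm")
            for r in rng.normal(base, 0.8, n_per_cell):
                rows.append((contrast, cct, gender, r))
df = pd.DataFrame(rows, columns=["contrast", "cct", "gender", "pleasure"])

# Three-way ANOVA with all interactions, matching a 2 x 2 x 2 factorial design
model = smf.ols("pleasure ~ C(contrast) * C(cct) * C(gender)", data=df).fit()
print(anova_lm(model, typ=2))
```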
Procedia PDF Downloads 390
528 Finite Element Analysis of Layered Composite Plate with Elastic Pin Under Uniaxial Load Using ANSYS
Authors: R. M. Shabbir Ahmed, Mohamed Haneef, A. R. Anwar Khan
Abstract:
Analysis of stresses plays an important role in the optimization of structures, and prior stress estimation helps in the better design of products. Composites find wide usage in industrial and home applications due to their strength-to-weight ratio; in the aircraft industry in particular, the usage of composites is greater because of their advantages over conventional materials. Composites are mainly made of orthotropic materials having unequal strength in different directions. Composite materials also have the drawback of delamination and debonding, because the bonding materials are weaker than the parent materials, so composite joints should be properly analysed before being used in practical conditions. In the present work, a composite plate with an elastic pin is considered for analysis using the finite element software ANSYS. The geometry is built in ANSYS using a top-down approach with different Boolean operations. The modelled object is meshed with the three-dimensional layered element Solid46 for the composite plate and the solid element Solid45 for the pin material. Various combinations are considered to find the strength of the composite joint under uniaxial loading conditions. Owing to the symmetry of the problem, only a quarter of the geometry is built, and results are presented for the full model using the ANSYS expansion options. The results show the effect of pin diameter on the joint strength: the deflection and the load shared by the pin increase, while other parameters such as overall stress, pin stress and contact pressure decrease, owing to the lower load carried by the plate material. The material study shows that a material with a higher Young's modulus deflects less, but the other parameters increase. The interference analysis shows an increase in overall stress, pin stress and contact stress along with the pin bearing load; this increase should be properly understood when seeking to raise the load-carrying capacity of the joint. Generally, every structure is preloaded to increase the compressive stress in the joint and thereby its load-carrying capacity, but for composites the stress increase should be analysed carefully because of the delamination and debonding that follow failure of the bonding materials. When the results for an isotropic combination are compared with the composite joint, the isotropic joint shows uniform results with lower values for all parameters, mainly due to the applied layer angle combinations. All results are presented with the necessary pictorial plots. Keywords: bearing force, frictional force, finite element analysis, ANSYS
Procedia PDF Downloads 334
527 Poly(ε-caprolactone)/Halloysite Nanotube Nanocomposite Scaffolds for Tissue Engineering
Authors: Z. Terzopoulou, I. Koliakou, D. Bikiaris
Abstract:
Tissue engineering offers a new approach to regenerating diseased or damaged tissues such as bone. Great effort is devoted to eliminating the need to remove non-degradable implants at the end of their life span, with biodegradable polymers playing a major part. Poly(ε-caprolactone) (PCL) is one of the best candidates for this purpose due to its high permeability, good biodegradability and exceptional biocompatibility, which has stimulated extensive research into its potential applications in the biomedical field. However, PCL degrades much more slowly than other known biodegradable polymers and has a total degradation time of 2-4 years, depending on the initial molecular weight of the device, owing to its relatively hydrophobic character and high crystallinity. Consequently, much attention has been given to tuning the degradation of PCL to meet the diverse requirements of biomedicine. PCL is also a biodegradable polyester that lacks bioactivity, so when it is used in bone tissue engineering, new bone tissue cannot bond tightly to the polymeric surface. It is therefore important to incorporate reinforcing fillers into the PCL matrix in order to obtain a promising combination of bioactivity, biodegradability, and strength. Natural clay halloysite nanotubes (HNTs) were incorporated into the PCL matrix, via in situ ring-opening polymerization of ε-caprolactone, at concentrations of 0.5, 1 and 2.5 wt%. Both unmodified HNTs and HNTs modified with aminopropyltrimethoxysilane (APTES) were used in this study. The effect of the nanofiller concentration and of the functionalization with end-amino groups on the physicochemical properties of the prepared nanocomposites was studied. The mechanical properties were enhanced after the incorporation of the nanofillers, while the modification further increased the tensile and impact strength values. The thermal stability of PCL was not affected by the presence of the nanofillers, while the crystallization rate, studied by differential scanning calorimetry (DSC) and polarized light optical microscopy (POM), increased. All materials were subjected to enzymatic hydrolysis in phosphate buffer in the presence of lipases. Due to the hydrophilic nature of HNTs, the biodegradation rate of the nanocomposites was higher than that of neat PCL. To confirm the effect of hydrophilicity, contact angle measurements were also performed. In vitro biomineralization tests confirmed that all samples were bioactive, as mineral deposits were detected by X-ray diffractometry after incubation in SBF. All scaffolds were also tested in a relevant cell culture using osteoblast-like cells (MG-63) to demonstrate their biocompatibility. Keywords: biomaterials, nanocomposites, scaffolds, tissue engineering
Procedia PDF Downloads 316
526 Self-Assembling Layered Double Hydroxide Nanosheets on β-FeOOH Nanorods for Reducing Fire Hazards of Epoxy Resin
Abstract:
Epoxy resins (EP), among the most important thermosetting polymers, are widely applied in various fields due to their desirable properties, such as excellent electrical insulation, low shrinkage, outstanding mechanical stiffness, satisfactory adhesion and solvent resistance. However, like most polymeric materials, EP has fatal drawbacks, including inherent flammability and a high yield of toxic smoke, which restrict its application in fields requiring fire safety. It therefore remains a challenge, and an interesting subject, to develop new flame retardants that not only remarkably improve flame retardancy but also give the modified resins a low yield of toxic gases. In recent work, polymer nanocomposites based on nanohybrids that contain two or more kinds of nanofillers have drawn intensive interest, as they can realize performance enhancements. Previously reported hybrids of carbon nanotubes (CNTs) and molybdenum disulfide provide a novel route for decorating layered double hydroxide (LDH) nanosheets on the surface of β-FeOOH nanorods; the deposited LDH nanosheets can fill the network and promote the working efficiency of the β-FeOOH nanorods. Moreover, the synergistic effects between LDH and β-FeOOH can be anticipated to have potential applications in reducing the fire hazards of EP composites through the combination of condensed-phase and gas-phase mechanisms. As reported, β-FeOOH nanorods can act as a core for preparing hybrid nanostructures in combination with other nanoparticles through electrostatic attraction using a layer-by-layer assembly technique. In this work, LDH nanosheet-wrapped β-FeOOH nanorod (LDH-β-FeOOH) hybrids were synthesized by a facile method, with the purpose of combining the characteristics of one-dimensional (1D) and two-dimensional (2D) structures to improve the fire resistance of epoxy resin. The hybrids showed good dispersion in the EP matrix with no obvious aggregation. Thermogravimetric analysis and cone calorimeter tests confirmed that incorporating LDH-β-FeOOH hybrids into the EP matrix at a loading of 3% could markedly improve the fire safety of the EP composites. The plausible flame retardancy mechanism was explored by thermogravimetric analysis coupled with infrared spectroscopy (TG-IR) and by X-ray photoelectron spectroscopy, and was attributed to combined condensed-phase and gas-phase action: the nanofillers migrated to the surface of the matrix during combustion, where they could not only shield the EP matrix from external radiation and heat feedback from the fire zone, but also efficiently retard the transport of oxygen and flammable pyrolysis products. Keywords: fire hazards, toxic gases, self-assembly, epoxy
Procedia PDF Downloads 174
525 An Investigation into Enablers and Barriers of Reverse Technology Transfer
Authors: Nirmal Kundu, Chandan Bhar, Visveswaran Pandurangan
Abstract:
Technology is the most valued possession of a country or an organization, and economic development depends not on the stock of technology but on the capability with which that technology is exploited. Technology transfer is the principal way in which developing countries gain access to state-of-the-art technology. Traditional technology transfer is a unidirectional phenomenon in which technology is transferred from developed to developing countries, but the wind is now changing direction: there is general agreement that a global shift of economic power is under way from west to east. As China and India make the transition from users to producers, and from producers to innovators, this has increasingly important implications for the economy, technology and policy of global trade. As a result, reverse technology transfer has become a phenomenon and a field of study in technology management. The term “Reverse Technology Transfer” is not well defined. Initially, the concept was associated with the phenomenon of “brain drain” from developing to developed countries; in a second phase, it was associated with the transfer of knowledge and technology from subsidiaries to multinationals. The time has now come to extend the concept to two different organizations or countries, related or unrelated by traditional technology transfer, in which the transferor has essentially received the technology through the traditional mode of technology transfer. The objective of this paper is to study: 1) the present status of reverse technology transfer, 2) the factors that act as enablers of and barriers to reverse technology transfer, and 3) how a reverse technology transfer strategy can be integrated into a country's technology policy to give it an economic boost. The research methodology used in this study is a combination of literature review, case studies and key informant interviews. The literature review includes both published and unpublished sources. In the case studies, an attempt has been made to examine records of reverse technology transfer that have occurred in developing countries. For the key informant interviews, informal telephone discussions were carried out with key executives of organizations (industry, universities and research institutions) actively engaged in the process of technology transfer, traditional as well as reverse. Reverse technology transfer is possible only by creating technological capabilities. The following four enablers, coupled with active and aggressive government action, can help build the technology base needed to reach the goal of reverse technology transfer: 1) moving from imitation to innovation, 2) reverse engineering, 3) a collaborative R&D approach, and 4) preventing reverse brain drain. The barriers that stand in the way are a mindset of over-dependence, over-subordination and a parent-child (rather than adult) attitude. By exploiting these enablers and overcoming the barriers, developing countries such as India and China can prove that going “reverse” is the best way to move forward and to re-establish themselves as leaders of the future world. Keywords: barriers of reverse technology transfer, enablers of reverse technology transfer, knowledge transfer, reverse technology transfer, technology transfer
Procedia PDF Downloads 399
524 Building the Professional Readiness of Graduates from Day One: An Empirical Approach to Curriculum Continuous Improvement
Authors: Fiona Wahr, Sitalakshmi Venkatraman
Abstract:
Industry employers require new graduates to bring with them a range of knowledge, skills and abilities (KSAs) that enable these new employees to make valuable work contributions immediately. These will be a combination of discipline and professional knowledge, skills and abilities that give graduates the technical capability to solve practical problems while interacting with a range of stakeholders. Underpinning the development of this discipline and professional knowledge, skills and abilities are “enabling” knowledge, skills and abilities that assist students to engage in learning. These academic and learning skills provide a common starting point for students entering the course and form the foundation of the fully developed graduate knowledge, skills and abilities. This paper reports on a project created to introduce and strengthen these enabling skills in the first semester of a Bachelor of Information Technology degree at an Australian polytechnic. The project uses an action research approach in the context of ongoing continuous improvement of the course to enhance the overall learning experience, learning sequencing, graduate outcomes and, most importantly in the first semester, student engagement and retention. The focus is on implementing the new curriculum in the first-semester subjects of the course, with the aim of developing “enabling” learning skills, such as literacy, research and numeracy-based knowledge, skills and abilities (KSAs). The approach used to introduce and embed these KSAs, both as enablers of learning and to underpin graduate attribute development, is presented. Building on previous publications that reported different aspects of this longitudinal study, this paper recaps the rationale for the curriculum redevelopment and then presents the quantitative findings on entering students’ reading literacy and numeracy knowledge and skill levels, as well as their perceived research ability, together with the methodology for this stage of the research. Overall, the cohort exhibits mixed KSA levels in these areas, with a relatively low aggregated score. In addition, the paper describes the considerations for adjusting the design and delivery of the new subjects to provide a targeted learning experience, in response to feedback gained through continuous monitoring. Such a strategy is aimed at accommodating the changing learning needs of the students and serves to support them towards achieving the enabling learning goals from day one of their higher education studies. Keywords: enabling skills, student retention, embedded learning support, continuous improvement
Procedia PDF Downloads 248