Search results for: PDSOI H-gate Device model Body contact.
A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs based on Machine Learning Algorithms
Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios
Abstract:
Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity and aflatoxigenic capacity of the strains, and the topography, soil and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for the contamination of aflatoxin on dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, our proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (Principal Component Analysis), metric learning (Mahalanobis Metric for Clustering) and the K-Nearest Neighbors learning algorithm (KNN), into an enhanced model, with mean performance equal to 85% in terms of the Pearson Correlation Coefficient (PCC) between observed and predicted values.
Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction
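As an illustration of the pipeline described above (PCA for dimensionality reduction, a Mahalanobis metric, and KNN scored by PCC), the following minimal Python sketch runs on synthetic data. The feature matrix, target, and the inverse-covariance stand-in for the learned MMC metric are placeholder assumptions, not the authors' dataset or code.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 18))            # hypothetical soil/topography features
y = 2.0 * X[:, 0] + rng.normal(size=120)  # hypothetical aflatoxin proxy

# Step 1: dimensionality reduction
X_red = PCA(n_components=5).fit_transform(X)

# Step 2: Mahalanobis metric. The inverse covariance below is a simple
# stand-in; a metric-learning step such as MMC would learn this matrix
# from clustering constraints instead.
VI = np.linalg.inv(np.cov(X_red, rowvar=False))

# Step 3: KNN regression under the Mahalanobis metric, scored by PCC
knn = KNeighborsRegressor(n_neighbors=5, metric="mahalanobis",
                          metric_params={"VI": VI})
knn.fit(X_red[:100], y[:100])
pcc, _ = pearsonr(y[100:], knn.predict(X_red[100:]))
print(f"PCC between observed and predicted: {pcc:.2f}")
```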
The Study of Cost Accounting in S Company Based On TDABC
Authors: Heng Ma
Abstract:
Third-party warehousing logistics plays an important role in the development of external logistics. At present, third-party logistics in our country is still a new industry and its accounting system has not yet been established; the current financial accounting practice of third-party warehousing logistics mainly follows the traditional way of thinking and is only able to provide the total cost information of the entire enterprise during the accounting period, unable to reflect indirect operating cost information. In order to solve the problem of cost information distortion in the third-party logistics industry and improve the level of logistics cost management, this paper combines theoretical research and case analysis to reflect cost allocation by building a third-party logistics costing model using Time-Driven Activity-Based Costing (TDABC), and takes S company as an example to account for and control warehousing logistics cost. Based on the idea that products consume activities and activities consume resources, TDABC takes time as the main cost driver and uses time-consuming equations to assign resource costs to cost objects. In S company, the study focuses on three warehouses engaged in warehousing and transportation services (the second warehouse serving as a transport point). Each of these warehouses comprises five departments (Business Unit, Production Unit, Settlement Center, Security Department and Equipment Division), and the activities in these departments are classified as in-out storage forecasting, in-out storage or transit, and safekeeping work. By computing the capacity cost rate and building the time-consuming equations, the paper calculates the final operation cost so as to reveal the real cost. The numerical analysis results show that TDABC can accurately reflect the cost allocation of serving customers and reveal the spare capacity cost of the resource centers, verifying the feasibility and validity of TDABC in third-party logistics industry cost accounting. It encourages enterprises to focus on customer relationship management and to reduce idle cost in order to strengthen the cost management of third-party logistics enterprises.
Keywords: Third-party logistics enterprises, TDABC, cost management, S company.
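A toy illustration of the two TDABC quantities named above, the capacity cost rate and a time-consuming equation, might look as follows; all figures and time drivers are invented for illustration, since S company's cost data are not published.

```python
# All figures are invented; minutes are the practical capacity time unit.
total_department_cost = 180_000.0      # quarterly cost of one warehouse dept
practical_capacity_min = 90_000.0      # practical working minutes available
capacity_cost_rate = total_department_cost / practical_capacity_min

def handling_minutes(pallets: int, needs_repack: bool) -> float:
    """Time-consuming equation: base time plus per-driver time terms."""
    return 5.0 + 1.2 * pallets + (8.0 if needs_repack else 0.0)

order_minutes = handling_minutes(pallets=12, needs_repack=True)
order_cost = order_minutes * capacity_cost_rate

used_minutes = 70_000.0                # minutes actually consumed by activities
idle_cost = (practical_capacity_min - used_minutes) * capacity_cost_rate

print(f"rate = {capacity_cost_rate:.2f}/min, "
      f"order cost = {order_cost:.2f}, spare capacity cost = {idle_cost:.0f}")
```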
Is HR in a State of Transition? An International Comparative Study on the Development of HR Competencies
Authors: Barbara Covarrubias Venegas, Sabine Groblschegg, Bernhard Klaus, Julia Domnanovich
Abstract:
Research Objectives: The roles and activities of Human Resource Management (HRM) have changed considerably in the past years. Driven by a changing environment and therefore new business requirements, the scope of human resource (HR) activities has widened. The extent to which these activities should focus on strategic issues to support the long-term success of a company has been discussed in science for many years. As many economies of Central and Eastern Europe (CEE) experienced a phase of transition after the socialist era and are now recovering from the 2008 global crisis, it is necessary to examine the current state of HR positioning. Furthermore, a trend can be noticed in which HR work develops from rather administrative units towards being a strategic partner of management. This leads to the question of better understanding the underlying competencies which are necessary to support organisations. This topic was addressed by the international study “HR Competencies in international comparison”. The quantitative survey was conducted by the Institute for Human Resources & Organisation of FHWien University of Applied Science of WKW (A) in cooperation with partner universities in Bosnia-Herzegovina, Croatia, Serbia and Slovenia. Methodology: Using the questionnaire developed by Dave Ulrich, we tested whether the HR Competency model can be used for Austria, Bosnia and Herzegovina, Croatia, Serbia and Slovenia. After performing confirmatory and exploratory factor analysis for the whole dataset containing all five countries, we could clearly distinguish between four competencies. In a further step, our analysis focused on median and average comparisons between the HR competency dimensions. Conclusion: Our literature review, in alignment with other studies, shows a relatively rapid pace of development of HR roles and HR competencies in BCSS in the past decades. Comparing data from BCSS and Austria, we can still notice that strategic orientation is lacking in the BCSS countries; thus, competencies are not as developed as in Austria. This leads us to the tentative conclusion that HR has undergone a rapid change but is still in a state of transition from being a rather administrative unit to performing the role of a strategic partner.
Keywords: Comparative study, HR competencies, HRM, HR roles.
Circular Economy Maturity Models: A Systematic Literature Review
Authors: D. Kreutzer, S. Müller-Abdelrazeq, I. Isenhardt
Abstract:
Resource scarcity, the energy transition and the planned climate neutrality pose enormous challenges for manufacturing companies. In order to achieve these goals and holistic sustainable development, the European Union has put forward the circular economy in its Circular Economy Action Plan. In addition to a reduction in resource consumption, reduced emissions of greenhouse gases and a reduced volume of waste, the principles of the circular economy also offer enormous economic potential for companies, such as the generation of new circular business models. However, many manufacturing companies, especially small and medium-sized enterprises, do not have the necessary capacity to plan their transformation. They need support and strategies on the path to circular transformation because this change affects not only production but the entire company. Maturity models offer an approach to determine the current status of companies’ transformation processes. In addition, companies can use the models to identify transformation strategies and thus promote the transformation process. While maturity models are established in other areas, e.g., IT or project management, only a few circular economy maturity models can be found in the scientific literature. The aim of this paper is to analyze the identified maturity models of the circular economy through a systematic literature review (SLR) and, among other aspects, to check their completeness as well as their quality. For this purpose, circular economy maturity models at the company (micro) level were identified from the literature, compared, and analyzed with regard to their theoretical and methodological structure. A specific focus was placed, on the one hand, on the analysis of the business units considered in the respective models and, on the other hand, on the underlying metrics and indicators used to determine the individual maturity level of the entire company. The results of the literature review show, for instance, a significant difference in the number and types of indicators as well as their metrics. For example, most models use subjective indicators and very few objective indicators in their surveys. It was also found that there are rarely well-founded thresholds between the levels. Based on the generated results, concrete ideas and proposals for a research agenda in the field of circular economy maturity models are made.
Keywords: Circular economy, maturity model, maturity assessment, systematic literature review.
Current Drainage Attack Correction via Adjusting the Attacking Saw Function Asymmetry
Authors: Yuri Boiko, Iluju Kiringa, Tet Yeap
Abstract:
The current drainage attack suggested previously is further studied in regular settings of a closed-loop controlled Brushless DC (BLDC) motor with a Kalman filter in the feedback loop. Modeling and simulation experiments are conducted in a MATLAB environment, implementing the closed-loop control model of BLDC motor operation in position sensorless mode under Kalman filter drive. The current increase in the motor windings is caused by the controller (a P-controller in our case) being affected by false data injection, i.e., substitution of the angular velocity estimates with distorted values. The distortion is applied by multiplication with a distortion coefficient whose values are taken from a distortion function synchronized in its periodicity with the rotor’s position change. A saw function with a triangular tooth shape is studied herewith for the purpose of carrying out the bias injection with current drainage consequences. The specific focus here is on how the asymmetry of the tooth in the saw function affects the flow of current drainage. The purpose is two-fold: (i) to produce and collect the signature of an asymmetric saw in the attack for further pattern recognition, and (ii) to determine conditions for improving the stealthiness of such an attack via regulating the asymmetry of the saw function used. It is found that modification of the symmetry of the saw tooth affects the periodicity of the current drainage modulation. Specifically, the modulation frequency of the drained current for a fully asymmetric tooth shape coincides with the saw function modulation frequency itself. Increasing the symmetry parameter for the triangular tooth shape leads to an increase in the modulation frequency of the drained current. Moreover, this frequency reaches the switching frequency of the motor windings for fully symmetric triangular shapes, thus becoming undetectable and improving the stealthiness of the attack. Therefore, the collected signatures of the attack can serve for attack parameter identification via the pattern recognition route.
Keywords: Bias injection attack, Kalman filter, BLDC motor, control system, closed loop, P-controller, PID-controller, current drainage, saw-function, asymmetry.
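The attacking saw function with adjustable tooth asymmetry could be sketched as below. The symmetry parameter s and the distortion depth are our own notation; the paper's parametrization may differ.

```python
import numpy as np

def saw_distortion(theta: np.ndarray, s: float, depth: float = 0.2) -> np.ndarray:
    """Distortion coefficient vs. rotor angle theta (rad).

    s in (0, 1) is the fraction of the period on the rising edge:
    s = 0.5 gives a symmetric triangle, s -> 1 a fully asymmetric sawtooth.
    """
    phase = (theta / (2.0 * np.pi)) % 1.0        # synchronized with rotor position
    tooth = np.where(phase < s, phase / s, (1.0 - phase) / (1.0 - s))
    return 1.0 + depth * tooth                   # multiplies the velocity estimate

theta = np.linspace(0.0, 4.0 * np.pi, 1000)      # two rotor revolutions
omega_true = np.full_like(theta, 150.0)          # rad/s, Kalman filter estimate
omega_spoofed = omega_true * saw_distortion(theta, s=0.9)  # injected bias
```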
Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy
Authors: May Fadheel Estephan, Richard Perks
Abstract:
Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes. However, current methods for cancer detection have limitations, such as low sensitivity and specificity. The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a non-invasive optical technique that can be used to characterize the size and concentration of particles in a solution. An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2 μm, 0.8 μm, and 0.413 μm. The spectra were collected using a spectrometer and a computer, and were analysed with a software program that fits the measured spectra to a theoretical scattering model in order to determine the size and concentration of the spheres. The results showed that the optical probe was able to differentiate between the three different sizes of polystyrene spheres, and that it could detect the presence of polystyrene spheres at suspension concentrations as low as 0.01%. The question addressed by this study was whether ELSS could be used to detect cancer cells. The results showed that ELSS could differentiate between different sizes of particles, suggesting that it could be used to detect cancer cells. These findings demonstrate the potential utility of ELSS in the early identification of cancer: ELSS can non-invasively characterize the number and size of cells in a tissue sample, and this information can be employed to identify cancer cells and assess the stage of the disease. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.
Keywords: Elastic Light Scattering Spectroscopy, Polystyrene spheres in suspension, optical probe, fibre optics.
Time-Cost-Quality Trade-off Software by using Simplified Genetic Algorithm for Typical Repetitive Construction Projects
Authors: Refaat H. Abd El Razek, Ahmed M. Diab, Sherif M. Hafez, Remon F. Aziz
Abstract:
Time-Cost Optimization (TCO) is one of the greatest challenges in construction project planning and control, since the optimization of either time or cost would usually be at the expense of the other. Because there is a hidden trade-off relationship between time and cost, it might be difficult to predict whether the total cost would increase or decrease as a result of schedule compression. Recently, a third dimension, the quality of the project, has been taken into consideration in trade-off analysis. Few of the existing algorithms have been applied to construction projects with three-dimensional trade-off analysis, i.e., time-cost-quality relationships. The objective of this paper is to present the development of a practical software system named Automatic Multi-objective Typical Construction Resource Optimization System (AMTCROS). This system incorporates the basic concepts of Line Of Balance (LOB) and the Critical Path Method (CPM) in a multi-objective Genetic Algorithm (GA) model. The main objective of this system is to provide practical support for typical construction planners who need to optimize resource utilization in order to minimize project cost and duration while simultaneously maximizing quality. The application of these research developments in planning typical construction projects holds a strong promise to: 1) increase the efficiency of resource use in typical construction projects; 2) reduce the construction duration; 3) minimize construction cost (direct cost plus indirect cost); and 4) improve the quality of newly constructed projects. A general description of the proposed software for the Time-Cost-Quality Trade-Off (TCQTO) is presented. The main inputs and outputs of the proposed software are outlined. The main subroutines and the inference engine of this software are detailed, and its complexity analysis is discussed. In addition, the proposed software is verified and tested using a real case study.
Keywords: Project management, typical (repetitive) large scale projects, line of balance, multi-objective optimization, genetic algorithms, time-cost-quality trade-offs.
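A stripped-down sketch of the GA idea behind a time-cost-quality trade-off (one gene per activity selecting a crew/method option, with a weighted fitness over time, cost and quality) is shown below; the option tables and weights are invented, and the LOB/CPM scheduling logic of AMTCROS is omitted.

```python
import random

# (duration_days, cost, quality in [0, 1]) alternatives per activity
OPTIONS = [[(10, 5000, 0.9), (7, 6500, 0.8), (5, 9000, 0.7)] for _ in range(5)]
W_T, W_C, W_Q = 0.4, 0.4, 0.2                    # trade-off weights

def fitness(chrom):
    t = sum(OPTIONS[i][g][0] for i, g in enumerate(chrom))
    c = sum(OPTIONS[i][g][1] for i, g in enumerate(chrom))
    q = sum(OPTIONS[i][g][2] for i, g in enumerate(chrom)) / len(chrom)
    # normalize time/cost against rough upper bounds, then maximize
    return -(W_T * t / 50.0 + W_C * c / 45000.0) + W_Q * q

def evolve(pop_size=40, generations=100):
    pop = [[random.randrange(3) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 5)
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < 0.1:             # mutation
                child[random.randrange(5)] = random.randrange(3)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```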
Parametric Approach for Reserve Liability Estimate in Mortgage Insurance
Authors: Rajinder Singh, Ram Valluru
Abstract:
The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the alternative approaches revolve around: stability of the recent loss development pattern, sufficiency and reliability of loss development data, and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good trade-off between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during the development period spanning many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produces more stable loss forecasts for reserving purposes compared to the traditional CL and BF methods.
Keywords: Actuarial loss reserving techniques, logistic regression, parametric function, volatility.
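The core parametric step, fitting a logistic (sigmoidal) curve to a cohort's cumulative loss development and reading off the ultimate loss, can be sketched as follows; the development data are fabricated for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, ultimate, k, t0):
    """Sigmoidal development curve; 'ultimate' is the asymptotic loss."""
    return ultimate / (1.0 + np.exp(-k * (t - t0)))

dev_quarters = np.arange(1, 13)
etd_losses = np.array([0.4, 1.1, 2.7, 5.6, 9.8, 14.2, 18.0, 20.6,
                       22.1, 23.0, 23.4, 23.6])   # ever-to-date losses, $M

(ultimate, k, t0), _ = curve_fit(logistic, dev_quarters, etd_losses,
                                 p0=[25.0, 0.8, 6.0])
reserve = ultimate - etd_losses[-1]               # remaining development
print(f"ultimate = {ultimate:.1f}M, indicated reserve = {reserve:.1f}M")
```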
An Investigation into the Views of Gifted Children on the Effects of Computer and Information Technologies on Their Lives and Education
Authors: Ahmet Kurnaz, Eyup Yurt, Ümit Çiftci
Abstract:
In this study, an attempt was made to reveal the place and effects of information technologies in the lives and education of gifted children, based on the views of the gifted children themselves. To this end, the effects of information technologies on gifted children's general skills, technology use, academic and social skills, and cooperative and personal skills were investigated. These effects were explored depending on whether or not the gifted children had their own computers or an internet connection at home, how often they used the internet, the average time they spent at the computer, how often they played computer games, and their use of social media. The study was conducted using the screening model with a quantitative approach. The sample consisted of 129 gifted students attending grades 5-12 in 12 provinces in different regions of Turkey; 64 of the participants were female while 65 were male. The research data were collected using the “Using Computer and Information Technologies” (UCIT) questionnaire, which was developed by the researchers and given its final form after expert review. As a result of the study, it was found that UCIT use improved the foreign language speaking skills of gifted children, enabled them to get to know and understand different cultures, and that they made use of computer and information technologies while studying. At the end of the study, the following results were obtained: gifted children have a positive attitude toward using computer and communication technology. Views on UCIT differ depending on internet use, but do not differ by computer ownership, city of residence, grade level, home internet access, daily and weekly internet usage durations, playing computer and internet games, or having Facebook and Twitter accounts. UCIT contributes to the development of gifted children's vocabulary, allows them to know and understand different cultures, and develops their foreign language speaking skills; gifted children do not give up the computer when they do their homework, and they improve their reading, listening, understanding and writing skills in a foreign language. Gifted children want a transition to the use of tablets in education. They think UCIT facilitates doing their homework and contributes to learning more information in a shorter time. They would like to use computer-assisted instruction programs in their courses, and they think they will be more successful in the future if their computer skills are good. However, gifted students prefer a teacher to teaching with computers, and they said that learning could be carried out from home without going to school.
Keywords: Gifted, using computer, communication technology.
Nonlinear Transformation of Laser Generated Ultrasonic Pulses in Geomaterials
Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas
Abstract:
The nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the apparatus “GEOSCAN-02M”. Ultrasonic pulses are excited by the pulses of a Q-switched Nd:YAG laser with a duration of 10 ns and an energy of 260 mJ; this energy can be reduced to 20 mJ by light filters. The laser beam radius did not exceed 5 mm. As a result of the absorption of the laser pulse in a special material, the optoacoustic generator, pulses of longitudinal ultrasonic waves are excited with a duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of these ultrasonic pulses passed through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has compression and rarefaction phases, and the amplitude of the rarefaction phase is five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cubic specimens of Karelian gabbro with a rib length of 3 cm are studied. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through an area of the specimen without cracks, the compression phase decreases and the rarefaction one increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from the structural defects. Computer processing of these signals yields images of the cross-sections of the specimens with cracks. With an increase of the reference pulse amplitude from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through the specimen with horizontal cracks results in a decrease by 2.5 times of the amplitude of the rarefaction phase and an increase of its duration by 2.1 times. With an increase of the reference pulse amplitude from 5 MPa to 10 MPa, time splitting of the phases is observed for the bipolar pulse passed through the specimen: the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for the location of cracks in optically opaque materials.
Keywords: Cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock.
Performance Study of Neodymium Extraction by Carbon Nanotubes Assisted Emulsion Liquid Membrane Using Response Surface Methodology
Authors: Payman Davoodi-Nasab, Ahmad Rahbar-Kelishami, Jaber Safdari, Hossein Abolghasemi
Abstract:
High-purity rare earth elements (REEs) have been widely used in chemical engineering, metallurgy, nuclear energy, optical, magnetic, luminescence and laser materials, superconductors, ceramics, alloys, catalysts, etc. Neodymium is one of the most abundant rare earths. With the development of neodymium–iron–boron (Nd–Fe–B) permanent magnets, the importance of neodymium has dramatically increased. Solvent extraction processes have many operational limitations, such as a large inventory of extractants, loss of solvent due to its organic solubility in aqueous solutions, volatilization of diluents, etc. One of the promising liquid membrane processes is the emulsion liquid membrane (ELM), which offers an alternative to solvent extraction processes. In this work, a study on Nd extraction through a multi-walled carbon nanotube (MWCNT) assisted ELM using response surface methodology (RSM) has been performed. The ELM was composed of diisooctylphosphinic acid (CYANEX 272) as carrier, MWCNTs as nanoparticles, Span-85 (sorbitan trioleate) as surfactant, kerosene as organic diluent and nitric acid as internal phase. The effects of important operating variables, namely surfactant concentration, MWCNT concentration, and treatment ratio, were investigated. Results were optimized using a central composite design (CCD) and a regression model for extraction percentage was developed. The 3D response surfaces of Nd(III) extraction efficiency were obtained, and the significance of the three important variables and their interactions on the Nd extraction efficiency was established. Results indicated that introducing the MWCNTs to the ELM process increased Nd extraction due to higher membrane stability and mass transfer enhancement. An MWCNT concentration of 407 ppm, a Span-85 concentration of 2.1 (%v/v) and a treatment ratio of 10 were found to be the optimum conditions. At the optimum conditions, the extraction of Nd(III) reached a maximum of 99.03%.
Keywords: Emulsion liquid membrane, extraction of neodymium, multi-walled carbon nanotubes, response surface method.
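The RSM step described above, fitting a second-order response surface to CCD runs, can be sketched as follows; the design points and responses are synthetic stand-ins for the laboratory data.

```python
import numpy as np

rng = np.random.default_rng(1)

# CCD-style design in coded units for 3 factors (MWCNT conc., Span-85
# conc., treatment ratio): 8 factorial, 6 axial, 2 center points
F = np.array([[i, j, k] for i in (-1, 1) for j in (-1, 1) for k in (-1, 1)],
             dtype=float)
A = 1.68 * np.vstack([np.eye(3), -np.eye(3)])
X = np.vstack([F, A, np.zeros((2, 3))])

# synthetic extraction responses with a quadratic optimum near the center
y = (95.0 - 3.0 * X[:, 0] ** 2 - 2.0 * X[:, 1] ** 2 - 1.5 * X[:, 2] ** 2
     + 1.2 * X[:, 0] * X[:, 1] + rng.normal(0.0, 0.5, len(X)))

def quad_terms(X):
    """Full second-order model matrix: intercept, linear, cross, square."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)  # least squares fit
print(quad_terms(np.zeros((1, 3))) @ beta)  # predicted response at the center
```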
Climate Related Financial Risk for Automobile Industry and Impact to Financial Institutions
Authors: S. Mahalakshmi, B. Senthil Arasu
Abstract:
As per the recent changes happening in global policies, climate-related changes and the impact they cause across every sector are viewed as green swan events: in essence, climate-related changes can happen often and lead to risk and a lot of uncertainty, but they need to be mitigated instead of being treated as black swan events. This raises the question of how this risk can be computed, so that financial institutions can plan to mitigate it. Climate-related changes impact all risk types: credit risk, market risk, operational risk, liquidity risk, reputational risk and others. The models required to compute this have to consider the different industrial needs of the counterparty, as well as the factors contributing to it, be it in the form of different risk drivers, different transmission channels, different approaches, or the granularity of available data. This suggests that climate-related change, though it affects Pillar I risks, will be a Pillar II risk. It has to be modeled specifically based on the financial institution’s actual exposure to different industries, instead of generalizing the risk charge, and it will have to be considered as additional capital to be held by the financial institution on top of its Pillar I risks as well as its existing Pillar II risks. In this paper, we present a risk assessment framework to model and assess climate change risks, for both credit and market risk. This framework helps in assessing the different scenarios and how the different transition risks affect the risk associated with the different parties. The paper delves into the increase in the concentration of greenhouse gases that is, in turn, causing global warming. It then considers various scenarios in which different risk drivers impact the credit and market risk of an institution, by understanding the transmission channels and also considering the transition risk. The paper then focuses on an industry that is rapidly being disrupted: the automobile industry. The paper uses the framework to show how climate change and changes to the relevant policies have impacted the financial institution as a whole. Appropriate statistical models for forecasting, anomaly detection and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also focuses on the climate risk calculation for the Pillar II capital requirement, and why it makes sense for a bank to maintain this in addition to its regular Pillar I and Pillar II capital.
Keywords: Capital calculation, climate risk, credit risk, pillar II risk, scenario modeling.
Calculation of the Thermal Stresses in an Elastoplastic Plate Heated by Local Heat Source
Authors: M. Khaing, A. V. Tkacheva
Abstract:
The work is devoted to solving the problem of temperature stresses caused by point heating of a round plate. The plate is made of an elastoplastic material, so the Prandtl-Reuss model is used. A piecewise-linear Ishlinsky-Ivlev flow condition, in which the yield stress depends on temperature, is taken as the loading surface. Piecewise-linear conditions (Tresca or Ishlinsky-Ivlev), in contrast to the Mises condition, make it possible to obtain solutions of the equilibrium equation in analytical form. In the problem under consideration, using the Tresca conditions, it is impossible to obtain a solution: the equation of equilibrium ceases to be satisfied when two Tresca conditions are fulfilled at once. Using the Ishlinsky-Ivlev plastic flow conditions allows one to solve the problem. At the same time, there is no solution on the edge of the Ishlinsky-Ivlev hexagon in the plane-stress state; therefore, the authors propose to jump from one edge of the hexagon to the adjacent one, which makes it possible to obtain an analytical solution. The paper compares solutions of the problem of plate thermal deformation. One of the solutions was obtained under the condition that the elastic moduli (Young's modulus, Poisson's ratio) depend on temperature; the yield point is assumed to be parabolically temperature dependent. The main results of the comparison are that the region of irreversible deformation is larger in the calculations obtained for the problem with constant elastic moduli, and that there is no repeated plastic flow in the solution of the problem with temperature-dependent elastic moduli. The absolute value of the irreversible deformations is higher for the solution of the problem in which the elastic moduli are constant; there are also insignificant differences in the distribution of the residual stresses.
Keywords: Temperature stresses, elasticity, plasticity, Ishlinsky-Ivlev condition, plate, annular heating, elastic moduli.
Probabilistic Life Cycle Assessment of the Nano Membrane Toilet
Authors: A. Anastasopoulou, A. Kolios, T. Somorin, A. Sowale, Y. Jiang, B. Fidalgo, A. Parker, L. Williams, M. Collins, E. J. McAdam, S. Tyrrel
Abstract:
Developing countries are nowadays confronted with great challenges related to domestic sanitation services in view of imminent water scarcity. Contemporary sanitation technologies established in these countries are likely to pose health risks unless waste management standards are followed properly. This paper presents a route to sustainable sanitation through the development of an innovative toilet system, the Nano Membrane Toilet (NMT), which has been developed by Cranfield University and sponsored by the Bill & Melinda Gates Foundation. This technology converts human faeces into energy through gasification and provides treated wastewater from urine through membrane filtration. In order to evaluate the environmental profile of the NMT system, a deterministic life cycle assessment (LCA) was conducted in the SimaPro software employing the Ecoinvent v3.3 database, and the most contributory factors to the environmental footprint of the NMT system were determined. However, as sensitivity analysis identified certain operating parameters as critical for the robustness of the LCA results, adopting a stochastic approach to the Life Cycle Inventory (LCI) will comprehensively capture the input data uncertainty and enhance the credibility of the LCA outcome. For that purpose, Monte Carlo simulations, in combination with an artificial neural network (ANN) model, were conducted for the input parameters of raw material, produced electricity, NOx emissions, amount of ash, and transportation of fertilizer. The analysis provides the distributions and confidence intervals of the selected impact categories so that, in turn, more credible conclusions can be drawn on the LCIA (Life Cycle Impact Assessment) profile of the NMT system. Last but not least, this study also yields essential insights into the methodological framework that can be adopted in the environmental impact assessment of other complex engineering systems subject to a high level of input data uncertainty.
Keywords: Sanitation systems, nano membrane toilet, LCA, stochastic uncertainty analysis, Monte Carlo simulations, artificial neural network.
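A minimal sketch of the stochastic LCI step follows: uncertain inventory inputs are sampled and propagated through a (here linear) impact model to obtain a distribution and confidence interval for one impact category. The characterization factors and parameter ranges are invented placeholders, not Ecoinvent values, and the ANN surrogate is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# sampled uncertain inventory flows per functional unit (invented ranges)
electricity_kwh = rng.normal(1.2, 0.15, N)                 # exported power
nox_kg = rng.lognormal(mean=np.log(0.004), sigma=0.3, size=N)
ash_kg = rng.triangular(0.05, 0.08, 0.12, N)

# hypothetical characterization factors (impact units per unit flow);
# the exported-electricity credit carries a negative factor
CF_ELEC, CF_NOX, CF_ASH = -0.5, 2.0, 0.1
impact = CF_ELEC * electricity_kwh + CF_NOX * nox_kg + CF_ASH * ash_kg

low, high = np.percentile(impact, [2.5, 97.5])
print(f"impact mean = {impact.mean():.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```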
Parametric Non-Linear Analysis of Reinforced Concrete Frames with Supplemental Damping Systems
Authors: Daniele Losanno, Giorgio Serino
Abstract:
This paper focuses on the parametric analysis of reinforced concrete structures equipped with supplemental damping braces. Practitioners still lack sufficient data for the design of damper-added structures and often reduce the real model to a pure damper-braced structure, even though this assumption is neither realistic nor conservative. In the present study, the damping brace is modelled as a linear supporting brace connected in series with the viscous/hysteretic damper. The deformation capacity of existing structures is usually not adequate to undergo the design earthquake. In spite of this, additional dampers can be introduced, strongly limiting structural damage to acceptable values or, in some cases, reducing the frame response to elastic behavior. This work is aimed at providing useful considerations for the retrofit of existing buildings by means of supplemental damping braces. The study explicitly takes into consideration the variability of (a) the relative frame-to-supporting-brace stiffness, (b) the dampers’ coefficient (viscous coefficient or yielding force) and (c) non-linear frame behavior. Non-linear time history analyses have been run to account for both the dampers’ behavior and non-linear plastic hinges modelled by the Pivot hysteretic type. Parametric analysis based on previous studies on SDOF or MDOF linear frames provides reference values for nearly optimal damping system design. With respect to the bare frame configuration, the seismic response of the damper-added frame is strongly improved, limiting deformations to acceptable values far below the ultimate capacity. Results of the analysis also demonstrate the beneficial effect of stiffer supporting braces, thus highlighting the inadequacy of simplified pure damper models. At the same time, the effect of variable damping coefficient and yielding force has to be treated as an optimization problem.
Keywords: Brace stiffness, dissipative braces, non-linear analysis, plastic hinges, reinforced concrete.
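One ingredient of such a non-linear time-history analysis can be sketched for the simplest case: an SDOF frame with supplemental viscous damping integrated by the Newmark average-acceleration method. All structural values and the ground motion are illustrative; the full analyses above additionally include the brace-damper series model and Pivot-type plastic hinges.

```python
import numpy as np

m, k = 2.0e5, 8.0e6                      # mass (kg) and stiffness (N/m)
c = 2 * (0.05 + 0.15) * np.sqrt(k * m)   # 5% inherent + 15% supplemental damping

dt, n = 0.01, 3000
ag = 3.0 * np.sin(2.0 * np.pi * np.arange(n) * dt)   # toy ground acceleration
beta, gamma = 0.25, 0.5                              # average acceleration rule

u = v = 0.0
a = (-m * ag[0] - c * v - k * u) / m
k_eff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
u_max = 0.0
for i in range(1, n):
    # effective load from current state plus next ground acceleration
    p_eff = (-m * ag[i]
             + m * (u / (beta * dt ** 2) + v / (beta * dt)
                    + (1.0 / (2.0 * beta) - 1.0) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                    + dt * (gamma / (2.0 * beta) - 1.0) * a))
    u_new = p_eff / k_eff
    v_new = (gamma / (beta * dt) * (u_new - u)
             + (1.0 - gamma / beta) * v
             + dt * (1.0 - gamma / (2.0 * beta)) * a)
    a = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
         - (1.0 / (2.0 * beta) - 1.0) * a)
    u, v = u_new, v_new
    u_max = max(u_max, abs(u))
print(f"peak displacement = {u_max * 1000:.1f} mm")
```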
Emerging VC Industry: Do Market Expectations Play the Most Important Role in Project Selection? Evidence on Russian Data
Authors: I. Rodionov, A. Semenov, E. Gosteva, O. Sokolova
Abstract:
Venture capital is becoming a more and more advanced and effective source of financing for innovation projects, which are connected with a high level of risk. In developed countries, it plays a key role in transforming innovation projects into successful businesses and creating the prosperity of the modern economy. In Russia, many of the necessary preconditions for the creation of an effective venture investment system exist: a network of public institutions for innovation financing operates, and there is a significant number of small and medium-sized enterprises capable of selling products with good market potential. However, the current system does not demonstrate the necessary level of efficiency in practice, which can be substantially explained by the absence of an accurate plan of action to form the national venture model and by the lack of experience of successful venture deals with profitable exits in the Russian economy. This paper studies the influence of various factors on the development of the venture industry, using the example of the IT sector in Russia. The choice of the sector is based on the fact that this segment is the main driver of venture capital market growth in Russia, and the necessary data exist. The size of the second-round investment is used as the dependent variable. To analyse the influence of the previous round, the volume of the previous (first) round investment is used as a determinant. A dummy variable is also used in the regression to examine whether the participation of an investor with high reputation and experience in the previous round influences the size of the next investment round. The regression analysis of short-term interrelations between the studied variables reveals the prevailing influence of the volume of first-round investment on the volume of second-round venture investment. The most important determinant of the value of the second-round investment is the value of the first-round investment, which means that the most competitive start-up teams on the Russian market are those that can attract more money at the start, and that target market growth is not a factor of crucial importance. This supports the point of view that VC in Russia is driven by endogenous factors rather than by exogenous ones based on global market growth.
Keywords: Venture industry, venture investment, determinants of the venture sector development, IT sector.
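The regression just described (second-round size on first-round size plus a reputation dummy) can be sketched as below; the deal data are synthetic placeholders rather than the Russian IT-sector sample.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 80
log_round1 = rng.normal(13.0, 1.0, n)        # log size of first-round deal
top_investor = rng.integers(0, 2, n)         # 1 if a reputable VC joined round 1
log_round2 = (0.9 * log_round1 + 0.3 * top_investor
              + rng.normal(0.0, 0.5, n))     # synthetic second-round size

X = sm.add_constant(np.column_stack([log_round1, top_investor]))
model = sm.OLS(log_round2, X).fit()
print(model.params)        # const, first-round elasticity, reputation dummy
```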
Florida’s Groundwater and Surface Water System Reliability in Terms of Climate Change and Sea-Level Rise
Authors: Rahman Davtalab, Saba Ghotbi
Abstract:
Florida is one of the most vulnerable states to natural disasters among the 50 states of the USA. The state is exposed to tropical storms, hurricanes, storm surge, landslides, etc. Besides these natural phenomena, global warming, sea-level rise, and other anthropogenic environmental changes create a very complicated and unpredictable system for decision-makers. In this study, we highlight the effects of climate change and sea-level rise on surface water and groundwater systems for three different geographical locations in Florida: the Main Canal of Jacksonville Beach in the northeast of Florida, adjacent to the Atlantic Ocean; Grace Lake in central Florida, far from the surrounding coastline; and McDill, adjacent to Tampa Bay and the Gulf of Mexico. An integrated hydrologic and hydraulic model was developed and simulated for all three cases, including surface water, groundwater, or a combination of both. For the case study of the Main Canal-Jacksonville Beach, the investigation showed that a 76 cm sea-level rise by time horizon 2060 could increase the flow velocity of the tide cycle at the main canal's outlet and headwater. This case also revealed how sea-level rise could change the tide duration, potentially affecting the coastal ecosystem. As expected, sea-level rise can raise the groundwater level. Therefore, for the McDill case, the effect of groundwater rise on soil storage and the performance of stormwater retention ponds is investigated. The study showed that sea-level rise increased the pond’s seasonal high water by up to 40 cm by time horizon 2060. The reliability of the retention pond drops from 99% for the current condition to 54% for the future. The results also proved that the retention pond could not retain and infiltrate the designed treatment volume within 72 hours, which is a significant indication of increasing pollutants in the future. The Grace Lake case study investigates the effects of climate change on groundwater recharge; using dynamically downscaled climate data, it showed that groundwater recharge can decline by up to 24% by the mid-21st century.
Keywords: Groundwater, surface water, Florida, retention pond, tide, sea-level rise.
In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish a valid in-flight one. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients G and B are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range can then be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2^n − 1 (n being the quantization bit number of the payload). Meanwhile, the sensor’s response linearity (δ) is described by the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr⁻¹·m⁻²·µm⁻¹ and −3.5 W·sr⁻¹·m⁻²·µm⁻¹; the low point of the dynamic range is −3.5 W·sr⁻¹·m⁻²·µm⁻¹ and the high point is 30.5 W·sr⁻¹·m⁻²·µm⁻¹; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor’s SNR, and the normalized SNR is about 59.6 when the mean value of the radiance is equal to 11.0 W·sr⁻¹·m⁻²·µm⁻¹; subsequently, the radiometric resolution is calculated to be about 0.1845 W·sr⁻¹·m⁻²·µm⁻¹. Moreover, in order to validate the results, a comparison of the measured radiance with radiative-transfer-code predictions is performed over four portable artificial targets with reflectances of 20%, 30%, 40% and 50%, respectively. The relative error of the calibration is noted to be within 6.6%.
Keywords: Calibration, dynamic range, radiometric resolution, SNR.
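The dynamic-range arithmetic reported above can be reproduced from the fitted coefficients; note that G = 0.0083 and a high point of 30.5 W·sr⁻¹·m⁻²·µm⁻¹ are consistent with DNH = 2^12 − 1, i.e. a 12-bit quantizer, which is our inference since the abstract does not state n explicitly.

```python
# Our inference: n = 12 quantization bits reproduces the reported figures.
G, B = 0.0083, -3.5          # fitted in-flight calibration coefficients
n = 12                       # assumed; not stated in the abstract
DN_H = 2 ** n - 1            # = 4095
L_high = G * DN_H + B        # high point of the dynamic range
L_low = B                    # low point (DN = 0)
print(f"dynamic range: {L_low:.1f} to {L_high:.1f} W·sr⁻¹·m⁻²·µm⁻¹")
# prints roughly -3.5 to 30.5, matching the values quoted above
```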
C-LNRD: A Cross-Layered Neighbor Route Discovery for Effective Packet Communication in Wireless Sensor Network
Authors: K. Kalaikumar, E. Baburaj
Abstract:
One of the problems to be addressed in wireless sensor networks is the set of issues related to cross-layer communication. A cross-layer architecture shares information across layers, ensuring Quality of Service (QoS). With this shared information, the MAC protocol adapts its functionality, such as route selection, to the changing sensor network environment. However, time slot assignment and neighbour route selection durations for the cross layer have not been addressed. Time-varying physical layer communication across layers causes a high traffic load in the sensor network, and although the traffic load can be reduced using a cross-layer optimization procedure, the computational cost is high. To improve communication efficacy in the sensor network, a self-determined time slot based Cross-Layered Neighbour Route Discovery (C-LNRD) method is presented in this paper. In the presented work, the initial step is to discover routes in the sensor network using Dynamic Source Routing based Medium Access Control (MAC) sub-layers. This step considers MAC layer operation with dynamic discovery of a route neighbour table. The discovered route paths for packet communication then employ a Broad Route Distributed Time Slot Assignment method on the cross-layered sensor network system, where Broad Route means time slotting over route paths of varying length. During packet communication in this sensor network, the transmission of packets is adjusted over different times with varying ranges to control the traffic rate. Finally, a Rayleigh fading model is developed in C-LNRD to characterize the performance of the sensor network communication structure. The main task of the Rayleigh fading model is to measure the power level of each communication under the MAC sub-layer; the minimized power level helps reduce the computational cost of packet communication in the sensor network. Experiments are conducted on factors such as power, packet communication, neighbour route discovery time, and information (i.e., packet) propagation speed.
Keywords: Medium access control, neighbour route discovery, wireless sensor network, Rayleigh fading, distributed time slot assignment
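A minimal sketch of the Rayleigh-fading power check used at the MAC sub-layer might look as follows; the mean link power and decoding threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
mean_rx_power_mw = 0.02            # assumed average received power per link
threshold_mw = 0.005               # assumed minimum decodable power
n_links = 10

# under Rayleigh fading the power gain |h|^2 is exponentially distributed
channel_gain = rng.exponential(1.0, n_links)
rx_power = mean_rx_power_mw * channel_gain

usable = rx_power >= threshold_mw
print(f"usable links: {usable.sum()}/{n_links}, "
      f"weakest link: {rx_power.min() * 1000:.2f} µW")
```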
Low Energy Technology for Leachate Valorisation
Authors: Jesús M. Martín, Francisco Corona, Dolores Hidalgo
Abstract:
Landfills present long-term threats to soil, air, groundwater and surface water due to the formation of greenhouse gases (methane and carbon dioxide) and leachate from decomposing garbage. The composition of leachate differs from site to site and also within a landfill, and leachates alter with time (from weeks to years) since the landfilled waste is biologically highly active. Mainly, the composition of the leachate depends on factors such as the characteristics of the waste, the moisture content, climatic conditions, the degree of compaction and the age of the landfill. Therefore, leachate composition cannot be generalized, and traditional treatment models should be adapted in each case. Although leachate composition is highly variable, what different leachates have in common is hazardous constituents and their potential eco-toxicological effects on human health and on terrestrial ecosystems. Since leachates have distinct compositions, each landfill or dumping site represents a different type of risk to its environment. Nevertheless, leachates always exhibit high organic concentration, high conductivity, heavy metals and ammonia nitrogen. Leachate can affect the current and future quality of water bodies due to uncontrolled infiltration. Therefore, control and treatment of leachate is one of the biggest issues in the design and management of urban solid waste treatment plants and landfills. This work presents a treatment model that will be carried out "in-situ" using a cost-effective novel technology that combines solar evaporation/condensation with forward osmosis. The plant is powered by renewable energies (solar energy, biomass and residual heat), which minimizes the carbon footprint of the process. The final effluent quality is very high, allowing reuse (preferred) or discharge into watercourses. In the particular case of this work, the final effluents will be reused for cleaning and gardening purposes. A minor semi-solid residual stream is also generated in the process. Due to its special composition (rich in metals and inorganic elements), this stream will be valorized in ceramic industries to improve the characteristics of the final products.
Keywords: Forward osmosis, landfills, leachate valorization, solar evaporation.
Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components
Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura
Abstract:
This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were inspected, and those reflecting brain activity were selected. Finally, the time series of the selected ICs were used to predict the eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied in two settings, namely the single-participant level and the group level. At the single-participant level, the EEG datasets used in the first and second stages originated from the same participant. At the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination, without repetition, of the EEG datasets of five of the six participants, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant level, which was significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group level, which was also significantly higher than the chance level (12.49 ± 0.01%). The classification accuracy within [–45°, 45°] of the true direction is 70.03 ± 8.14% for the single-participant level and 62.63 ± 6.07% for the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results show the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
Keywords: Brain-computer interface, BCI, electroencephalography, EEG, finger motion decoding, independent component analysis, pseudo-real-time motion decoding.
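The two-stage pipeline above can be sketched with off-the-shelf components: ICA unmixing fitted on the first dataset, a subset of "brain" ICs, and a sparse (L1-regularized) logistic classifier reused, together with the unmixing matrix, on the second dataset. The shapes, the IC selection, and the use of scikit-learn's L1 logistic regression as a stand-in for SLR are simplifying assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_stage1 = rng.normal(size=(6000, 64))     # samples x channels, dataset 1
y_stage1 = rng.integers(0, 8, 6000)        # 8 finger-movement directions
X_stage2 = rng.normal(size=(2000, 64))     # dataset 2 (same participant)
y_stage2 = rng.integers(0, 8, 2000)

ica = FastICA(n_components=20, random_state=0).fit(X_stage1)
brain_ics = [0, 3, 5, 7]                   # stand-in for expert IC selection
S1 = ica.transform(X_stage1)[:, brain_ics]
S2 = ica.transform(X_stage2)[:, brain_ics] # unmixing matrix is reused

clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=2000)
clf.fit(S1, y_stage1)
print(f"stage-2 accuracy: {clf.score(S2, y_stage2):.3f}")  # ~chance on noise
```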
Influence of Sire Breed, Protein Supplementation and Gender on Wool Spinning Fineness in First-Cross Merino Lambs
Authors: A. E. O. Malau-Aduli, B. W. B. Holman, P. A. Lane
Abstract:
Our objectives were to evaluate the effects of sire breed, type of protein supplement, level of supplementation and sex on wool spinning fineness (SF), its correlations with other wool characteristics and prediction accuracy in F1 Merino crossbred lambs. Texel, Coopworth, White Suffolk, East Friesian and Dorset rams were mated with 500 purebred Merino dams at a ratio of 1:100 in separate paddocks within a single management system. The F1 progeny were raised on ryegrass pasture until weaning, before forty lambs were randomly allocated to treatments in a 5 x 2 x 2 x 2 factorial experimental design representing 5 sire breeds, 2 supplementary feeds (canola or lupins), 2 levels of supplementation (1% or 2% of liveweight) and sex (wethers or ewes). Lambs were supplemented for six weeks after an initial three weeks of adjustment; wool was sampled at the commencement and conclusion of the feeding trial and analyzed for SF, mean fibre diameter (FD), coefficient of variation (CV), standard deviation, comfort factor (CF), fibre curvature (CURV), and clean fleece yield. Data were analyzed using mixed linear model procedures with sire fitted as a random effect, and sire breed, sex, supplementary feed type, level of supplementation and their second-order interactions as fixed effects. Sire breed (P<0.001), sex (P<0.004), sire breed x level of supplementation (P<0.004), and sire breed x sex (P<0.019) interactions significantly influenced SF. SF ranged from 22.7 ± 0.2μm in White Suffolk-sired lambs to 25.1 ± 0.2μm in East Friesian crossbred lambs. Ewes had higher SF than wethers. There were significant (P<0.001) correlations between SF and FD (0.93), CV (0.40), CF (-0.94) and CURV (-0.12). Its strong relationship with other wool quality traits enabled accurate predictions explaining up to about 93% of the observed variation. The interactions between sire breed genetics and nutrition will have an impact on the choices that dual-purpose sheep producers make when selecting sire breeds and protein supplementary feed levels to achieve optimal wool spinning fineness at the farmgate level. This will facilitate selective breeding programs being able to better account for SF and its interactions with other wool characteristics.
Keywords: Merino crossbred sheep, protein supplementation, sire breed, wool quality, wool spinning fineness.
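The analysis model can be sketched as a linear mixed model with sire as a random effect and the design factors as fixed effects, e.g. via statsmodels; the data frame below is a synthetic stand-in for the 40-lamb experiment, and the interaction terms are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 40
df = pd.DataFrame({
    "SF": rng.normal(24.0, 1.0, n),        # spinning fineness, µm
    "breed": rng.choice(["Texel", "Coopworth", "WhiteSuffolk",
                         "EastFriesian", "Dorset"], n),
    "feed": rng.choice(["canola", "lupins"], n),
    "level": rng.choice(["1pct", "2pct"], n),
    "sex": rng.choice(["wether", "ewe"], n),
    "sire": rng.choice([f"sire_{i}" for i in range(5)], n),
})

# sire as random intercept; main design factors as fixed effects
model = smf.mixedlm("SF ~ breed + feed + level + sex",
                    data=df, groups=df["sire"]).fit()
print(model.summary())
```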
59 Statistical Modeling of Constituents in Ash Evolved From Pulverized Coal Combustion
Authors: Esam Jassim
Abstract:
Industries using conventional fossil fuels have an interest in better understanding the mechanism of particulate formation during combustion, since it is responsible for the emission of undesired inorganic elements that directly impact atmospheric pollution levels. Fine and ultrafine particulates tend to escape flue gas cleaning devices into the atmosphere. They also preferentially collect on surfaces in power systems, increasing the tendency toward corrosion, reducing heat transfer in the thermal unit, and severely impacting human health. These adverse effects are particularly pronounced in regions of the world where coal is the dominant source of energy. This study highlights the behavior of calcium transformation as mineral grains versus organically associated inorganic components during pulverized coal combustion. The influence of the type of calcium present on the coarse, fine and ultrafine mode formation mechanisms is also presented. The impact of two sub-bituminous coals on particle size and calcium composition evolution during combustion is assessed. Three mixed blends, named Blends 1, 2, and 3, are selected according to the ratio of coal A to coal B by weight; the calcium percentage in the original coal increases from Blend 1 to Blend 3. A mathematical model and a new approach to describing constituent distribution are proposed. The experimentally measured calcium distribution in ash is modeled using the Poisson distribution, and a novel parameter, called the elemental index λ, is introduced as a measure of element distribution. Results show that calcium present in the original coal as mineral grains has an index of 17, whereas organically associated calcium transformed to fly ash is best described by an elemental index λ of 7. As an alkaline-earth element, calcium is considered the fundamental element responsible for boiler deficiency, since it is the major player in the mechanism of the ash slagging process. The mechanisms of particle size distribution and the mineral species of ash particles are presented using CCSEM and size-segregated ash characteristics. Conclusions are drawn from the analysis of pulverized coal ash generated from a utility-scale boiler.
Keywords: Calcium transformation, Coal Combustion, Inorganic Element, Poisson distribution.
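The abstract does not publish the raw particle data, so the following sketch only illustrates the assumed workflow: for a Poisson model, the maximum-likelihood estimate of the elemental index λ is the sample mean of per-particle element counts, which can then be checked against the Poisson expectations.

```python
import numpy as np
from scipy import stats

# Hypothetical per-particle calcium counts from CCSEM analysis (arbitrary
# units); these values are illustrative, not the study's measurements.
counts = np.array([15, 18, 17, 16, 19, 17, 14, 18, 16, 17])

# For a Poisson model the maximum-likelihood estimate of the elemental
# index lambda is simply the sample mean of the counts.
lam = counts.mean()
print(f"elemental index lambda ~ {lam:.1f}")

# Goodness of fit: compare observed frequencies with Poisson(lam) expectations.
observed = np.bincount(counts, minlength=counts.max() + 1)
k = np.arange(observed.size)
expected = stats.poisson.pmf(k, lam) * counts.size
print(np.round(expected, 2))
```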
58 Magnitude and Determinants of Overweight and Obesity among High School Adolescents in Addis Ababa, Ethiopia
Authors: Mulugeta Shegaze, Mekitie Wondafrash, Alemayehu A. Alemayehu, Shikur Mohammed, Zewdu Shewangezaw, Mukerem Abdo, Gebresilasea Gendisha
Abstract:
Background: The 2004 World Health Assembly called for specific actions to halt the overweight and obesity epidemic that is currently penetrating urban populations in the developing world. Adolescents require particular attention due to their vulnerability to developing obesity and the fact that adolescent weight tracks strongly into adulthood. However, there is a scarcity of information on the modifiable risk factors to be targeted for primary intervention among urban adolescents in Ethiopia. This study aimed to determine the magnitude and risk factors of overweight and obesity among high school adolescents in Addis Ababa. Methods: An institution-based cross-sectional study was conducted in February and March 2014 on 456 randomly selected adolescents from 20 high schools in Addis Ababa city. Demographic data and other risk factors of overweight and obesity were collected using a self-administered structured questionnaire, whereas anthropometric measurements of weight and height were taken using calibrated equipment and standardized techniques. The WHO STEPS instrument for chronic disease risk was applied to assess dietary habits and physical activity. Overweight and obesity status was determined based on BMI-for-age percentiles of the WHO 2007 reference population. Results: The prevalence rates of overweight, obesity, and overall overweight/obesity among high school adolescents in Addis Ababa were 9.7% (95%CI = 6.9-12.4%), 4.2% (95%CI = 2.3-6.0%), and 13.9% (95%CI = 10.6-17.1%), respectively. Overweight/obesity prevalence was highest among female adolescents, in private schools, and in the higher wealth category. In the multivariable regression model, being female [AOR(95%CI) = 5.4(2.5,12.1)], attending a private school [AOR(95%CI) = 3.0(1.4,6.2)], having >3 regular meals [AOR(95%CI) = 4.0(1.3,13.0)], consumption of sweet foods [AOR(95%CI) = 5.0(2.4,10.3)] and spending >3 hours/day sitting [AOR(95%CI) = 3.5(1.7,7.2)] were found to increase overweight/obesity risk, whereas a high total physical activity level [AOR(95%CI) = 0.21(0.08,0.57)] and better nutrition knowledge [AOR(95%CI) = 0.16(0.07,0.37)] were found protective. Conclusions: More than one in ten high school adolescents were affected by overweight/obesity, with dietary habits and physical activity as important modifiable risk factors. A well-tailored nutrition education program targeting lifestyle change should be initiated, with more emphasis on female adolescents and students in private schools.
Keywords: Adolescents, NCDs, overweight, obesity.
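A hedged sketch of the multivariable analysis: assuming a binary overweight/obesity outcome and the predictors named above (all column names and the input file are hypothetical), adjusted odds ratios with 95% confidence intervals can be obtained by exponentiating logistic-regression coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns; ow_ob is 1 for overweight/obese (WHO 2007
# BMI-for-age percentiles) and 0 otherwise.
df = pd.read_csv("adolescents.csv")

model = smf.logit(
    "ow_ob ~ female + private_school + meals_gt3 + sweets"
    " + sitting_gt3h + high_activity + nutrition_knowledge",
    data=df,
).fit()

# Adjusted odds ratios (AOR) with 95% confidence intervals.
aor = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```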
57 Advanced Compound Coating for Delaying Corrosion of Fast-Dissolving Alloy in High Temperature and Corrosive Environment
Authors: Lei Zhao, Yi Song, Tim Dunne, Jiaxiang (Jason) Ren, Wenhan Yue, Lei Yang, Li Wen, Yu Liu
Abstract:
Fast-dissolving magnesium (DM) alloy technology has contributed significantly to the “Shale Revolution” in the oil and gas industry. This application requires DM downhole tools to dissolve initially at a slow rate and then accelerate rapidly to a high rate after a certain period of operation (typically 8 h to 2 days), a contradictory requirement that can hardly be addressed by traditional Mg alloying or processing alone. Premature disintegration of downhole DM tools has been broadly reported from field trials. To address this issue, “temporary” thin polymer coatings of various formulations are currently applied to DM surfaces to delay the initial dissolution. Owing to part conveyance, harsh downhole conditions, and the high dissolution rate of the base material, current delay coatings relying on pure polymers have been found to perform well only at low temperatures (typically < 100 ℃) and on parts without sharp edges or corners, as severe geometries prevent high-quality thin-film coatings from forming effectively. In this study, a coating technology combining plasma electrolytic oxide (PEO) coatings with advanced thin-film deposition has been developed, which can delay the dissolution of complex DM parts (with sharp corners) in corrosive fluid at 150 ℃ for over 2 days. Synergistic effects between the porous, hard PEO coating and the chemically inert, elastic polymer sealing lead to the improved dissolution delay, and the strong chemical/physical bonding between these two layers has been found to play an essential role. The microstructure of this advanced coating and the compatibility between PEO and various polymer selections have been thoroughly investigated, and a model is proposed to explain the delaying performance. This study could not only benefit the oil and gas industry in unplugging high-temperature high-pressure (HTHP) unconventional resources that were previously inaccessible, but also potentially provide a technical route for other industries (e.g., biomedical, automobile, aerospace) where primer anti-corrosive protection on light Mg alloys is in high demand.
Keywords: Dissolvable magnesium, coating, plasma electrolytic oxide, sealer.
56 The Mechanism Underlying Empathy-Related Helping Behavior: An Investigation of Empathy-Attitude-Action Model
Authors: Wan-Ting Liao, Angela K. Tzeng
Abstract:
Empathy has been an important issue in psychology and education, as well as in cognitive neuroscience. Empathy has two major components: cognitive and emotional. The cognitive component refers to the ability to understand others’ perspectives, thoughts, and actions, whereas the emotional component refers to the ability to understand how others feel. Empathy can be induced, attitude can then be changed, and with enough attitude change, helping behavior can occur. This finding leads us to two questions: is attitude change really necessary for prosocial behavior, and what roles do cognitive and affective empathy play? For the second question, participants with different psychopathic personality (PP) traits are critical, because people high in PP have been found to suffer only an affective empathy deficit; their cognitive empathy shows no significant difference from the control group. A total of 132 college students voluntarily participated in the current three-stage study. Stage 1 collected basic information, including the Interpersonal Reactivity Index (IRI), the Psychopathic Personality Inventory-Revised (PPI-R), an Attitude Scale, a Visual Analogue Scale (VAS), and demographic data. Stage 2 induced empathy with three controversial scenarios, namely domestic violence, depression with a suicide attempt, and an ex-offender. Participants read all three stories and then rewrote each story from one of two perspectives (empathic vs. objective). They then completed the VAS and the Attitude Scale one more time to record their post-induction attitude and emotional status. Three IVs were introduced for data analysis: PP (high vs. low), responsibility (whether or not the character is responsible for what happened), and perspective-taking (empathic vs. objective). Stage 3 concerned action: participants were instructed to use freely the 17 tokens they had received as donations. They were debriefed and interviewed at the end of the experiment. The major findings were that people with higher empathy tend to take more helping action, and that attitude change is not necessary for prosocial behavior. The controversiality of the scenarios and participants’ familiarity with the target groups play very important roles. Finally, people high in PP tend to show more public prosocial behavior due to their affective empathy deficit. Pre-existing values and beliefs, as well as recent dramatic social events, seem to have a large impact and possibly reduce the effects of the independent variables (IVs) in our paradigm.
Keywords: Affective empathy, attitude, cognitive empathy, prosocial behavior, psychopathic traits.
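The abstract does not name the statistical analysis; one plausible treatment of the three IVs is a between-subjects factorial ANOVA on the number of donated tokens, sketched below with hypothetical column names and input file.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical layout: one row per participant, tokens = number of the 17
# tokens donated; pp, responsibility and perspective are the three IVs.
df = pd.read_csv("empathy_study.csv")

model = smf.ols(
    "tokens ~ C(pp) * C(responsibility) * C(perspective)", data=df
).fit()
print(sm.stats.anova_lm(model, typ=2))   # 2 x 2 x 2 factorial ANOVA table
```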
55 Education Quality Development for Excellence Performance with Higher Education by Using COBIT 5
Authors: Kemkanit Sanyanunthana
Abstract:
The purpose of this research is to study the information technology (IT) management systems that support education at five private universities in Thailand, based on case studies of institutions that have been developing their quality and standards of management and education through the provision of IT services in support of excellent performance. IT administrators have created the concept of connecting information technology with a suitable system for development, one that can be used throughout the organization to derive the utmost benefit from all resources. As a practitioner of these duties in higher education, the researcher was interested in conducting this study by selecting the Control Objectives for Information and Related Technology 5 (COBIT 5) framework together with the Malcolm Baldrige National Quality Award (MBNQA) of the United States, a national award that applies the concept of Total Quality Management (TQM) to organizational evaluation. This evaluation, called the Education Criteria for Performance Excellence (EdPEx), focuses on studying and comparing education quality development for excellent performance using COBIT 5 in terms of information technology: to identify the problems and obstacles in the investigation process for an IT system, which is regarded as an instrument to drive organizations toward excellent IT performance, and to serve as a model for evaluating and analyzing whether processes accord with the universities’ IT strategic plans. The research was conducted as descriptive and survey research based on the case studies. Data were collected by questionnaires administered to administrators working in the IT field, with research documents related to change management as the main supporting material. The research concludes that performance in the APO domain (Align, Plan and Organise) of the COBIT 5 framework, which emphasizes concordant governance and management of the organizations’ strategic plans, reached only 95%. This may be due to restrictions such as organizational culture; therefore, the researcher studied and analyzed the management of information technology in the universities as a whole, within their organizational structures, to achieve performance in accordance with the overall APO domain. This would affect the defined strategic plans, enable development based on excellent IT performance, and allow a risk management system to be applied at the organizational level to every process, thereby improving the effectiveness of IT resource management for the utmost benefit.
Keywords: COBIT 5, APO, EdPEx Criteria, MBNQA.
54 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and affect the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present metagenome2vec, an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches performance comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: Metagenomics, phenotype prediction, deep learning, embeddings, multiple instance learning.
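Steps (i) and (ii) of the approach can be illustrated with a short sketch using gensim's Word2Vec as a stand-in embedding model; the reads, k-mer size, and vector dimension are illustrative, and a read embedding is formed here by simply averaging its k-mer vectors, which is one option among several.

```python
import numpy as np
from gensim.models import Word2Vec

def kmers(read: str, k: int = 6):
    """Split a DNA read into overlapping k-mers (step i: vocabulary)."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Illustrative reads; real inputs would come from fastq files.
reads = [
    "ATGCGTACGTTAGCATGCAGT",
    "GGCATCGTAGCTAGCTAGGAT",
    "TTGACGATCGATCGTACGATC",
]
corpus = [kmers(r) for r in reads]

# Step i/ii: learn k-mer embeddings with skip-gram.
model = Word2Vec(sentences=corpus, vector_size=64, window=5, min_count=1, sg=1)

# Step ii: a read embedding as the mean of its k-mer vectors.
read_vec = np.mean([model.wv[k] for k in corpus[0]], axis=0)
print(read_vec.shape)   # (64,)
```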
53 Safety Climate Assessment and Its Impact on the Productivity of Construction Enterprises
Authors: Krzysztof J. Czarnocki, F. Silveira, E. Czarnocka, K. Szaniawska
Abstract:
Research background: Problems related to occupational health and a decreasing level of safety are common in the construction industry. An important factor in occupational safety in the construction industry is scaffold use. All scaffolds used in construction, renovation, and demolition shall be erected, dismantled and maintained in accordance with safety procedures. Increasing demand for new construction projects is unfortunately still linked to a high level of occupational accidents. It is therefore crucial to implement concrete actions when dealing with scaffolds and risk assessment in the construction industry; the way an assessment is carried out, and its reliability, are critical for both construction workers and the regulatory framework. Unfortunately, professionals, who tend to rely heavily on their own experience and knowledge when making risk assessment decisions, may lack reliability in checking the results of those decisions. Purpose of the article: The aim was to identify crucial parameters that could be modeled with a Risk Assessment Model (RAM) to improve construction enterprise productivity, development potential, and safety climate. The developed RAM could be of benefit in predicting high-risk construction activities and thus preventing accidents, based on a set of historical accident data. Methodology/Methods: A RAM has been developed for assessing risk levels at various construction process stages, with various work trades impacting different spheres of enterprise activity. This project includes research carried out by teams of researchers on over 60 construction sites in Poland and Portugal, in which over 450 individual research cycles were performed. The research trials covered variable conditions of employee exposure to harmful physical and chemical factors, variable levels of employee stress, and differences in staff behaviors and habits. A genetic modeling tool was used to develop the RAM. Findings and value added: Common types of trades, accidents, and accident causes have been explored, in addition to suitable risk assessment methods and criteria. We found that the initial worker stress level is a more direct predictor of the unsafe chain of events leading to an accident than workload, the concentration of harmful factors at the workplace, or even training frequency and management involvement.
Keywords: Civil engineering, occupational health, productivity, safety climate.
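The abstract names a genetic modeling tool without specifying it; the sketch below is therefore only a generic genetic-algorithm illustration that evolves weights for a linear risk score over synthetic "historical accident" records, with the stress-level column given the strongest true effect to mirror the reported finding.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic historical data: rows are site observations, columns are factors
# (stress level, workload, harmful-factor concentration, training frequency);
# y is 1 if an accident occurred. Stress dominates by construction.
X = rng.random((450, 4))
y = (X @ np.array([0.9, 0.2, 0.3, 0.1]) + 0.1 * rng.standard_normal(450)) > 0.75

def fitness(w):
    """Accuracy of a thresholded linear risk score under weights w."""
    score = X @ w
    return np.mean((score > np.median(score)) == y)

# Minimal genetic algorithm: selection, crossover, mutation.
pop = rng.random((50, 4))
for _ in range(100):
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[-25:]]                       # selection
    cross = rng.integers(0, 2, parents.shape).astype(float)
    children = cross * parents + (1 - cross) * parents[::-1]   # crossover
    children += 0.05 * rng.standard_normal(children.shape)     # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print(best)  # the largest weight should track the dominant predictor (stress)
```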
52 Resolving a Piping Vibration Problem by Installing Viscous Damper Supports
Authors: Carlos Herrera Sierralta, Husain M. Muslim, Meshal T. Alsaiari, Daniel Fischer
Abstract:
The vast majority of piping vibration problems in the oil and gas industry are provoked by the characteristics of the process flow, which are basically related to the fluid properties, the type of service and its different operational scenarios. In general, the corrective actions recommended for flow-induced vibration in piping systems can be grouped into two major areas: those which affect the excitation mechanisms, typically associated with process variables, and those which affect the response mechanism of the pipework itself. Where possible, the first option is to try to solve the flow-induced problem from the excitation-mechanism perspective. However, in producing facilities, changing process parameters may not always be convenient, as it could reduce production rates or require a shutdown of the system. That impediment may lead to the second option, which is to modify the response of the piping system to the excitation generated by the process flow. In principle, shifting the natural frequency of the system well above the frequency inherent to the process always favours the elimination, or at least a considerable reduction, of the vibration experienced by the piping system. Tightening up the clearances at the supports (ideally to zero gap) and adding new static supports to the system are typical ways of increasing the natural frequency of the piping system. However, stiffening the piping system alone may not be sufficient to resolve the vibration problem, and in some cases it might not be feasible at all, as the available piping layout could limit the addition of supports due to thermal expansion/contraction requirements. In such cases, viscous damper supports can be recommended, as these devices allow relatively large quasi-static movement of the piping while providing sufficient capability for dissipating the vibration. Therefore, when correctly selected and installed, viscous damper supports can significantly improve the response of the piping system over a wide range of frequencies. Viscous dampers cannot, however, be used to support sustained static loads. Through a real case example, this paper presents a methodology for selecting viscous damper supports via a dynamic analysis model. By implementing this methodology, it is possible to resolve piping vibration problems by adding new viscous damper supports to the system; the methodology can be applied to similar vibration issues.
Keywords: Dynamic analysis, flow-induced vibration, piping supports, turbulent flow, slug flow, viscous damper.
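The rationale for viscous dampers can be illustrated with the classic single-degree-of-freedom amplification formula: at resonance the dynamic amplification equals 1/(2ζ), so raising the damping ratio ζ collapses the resonance peak even when the natural frequency cannot be shifted. The sketch below uses illustrative parameters, not values from the case study.

```python
import numpy as np

def amplification(w, wn, zeta):
    """Dynamic amplification |H(w)| of an SDOF pipe-span idealization."""
    r = w / wn
    return 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

wn = 2 * np.pi * 10.0   # natural frequency of the span, 10 Hz (illustrative)

for zeta in (0.02, 0.30):   # bare pipe vs. pipe with a viscous damper support
    peak = amplification(wn, wn, zeta)   # response at resonance = 1 / (2*zeta)
    print(f"zeta={zeta:.2f}: amplification at resonance = {peak:.1f}")
```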