Search results for: variable precision.
43 Contaminant Transport in Soil from a Point Source
Authors: S. A. Nta, M. J. Ayotamuno, A. H. Igoni, R. N. Okparanma
Abstract:
The work sought to understand the pattern of movement of a contaminant from a continuous point source through soil. The soil used was sandy-loam in texture. The contaminant used was municipal solid waste landfill leachate, introduced as a point source through an entry point located at the center of the top layer of the soil tank. Analyses were conducted after maturity periods of 50 and 80 days. The maximum change in chemical concentration was observed in soil samples at a radial distance of 0.25 m. A finite element approximation based model was used to assess future prediction, management and remediation in the polluted area. The actual field data collected for the case study were used to calibrate the model and thus simulate the flow pattern of the pollutants through the soil. MATLAB R2015a was used to visualize the flow of pollutant through the soil. The dispersion coefficient at 0.25 and 0.50 m radial distance from the point of application of leachate gives a measure of the spreading of the flowing leachate due to the nature of the soil medium, with its interconnected channels distributed at random in all directions. Surface plots of metals on soil after a maturity period of 80 days show a functional relationship between a designated dependent variable (Y) and two independent variables (X and Z). Comparison of measured and predicted transport profiles along the depth after 50 and 80 days of leachate application and at the end of the experiment shows that there was not much difference between the predicted and measured concentrations, as they all lay close to each other. For the analysis of contaminant transport, the finite difference approximation based model was very effective in assessing future prediction, management and remediation in the polluted area. The experiment gave insight into the most likely pattern of movement of the contaminant as a result of continuous percolation of the leachate through the soil. This is important for contaminant movement prediction and subsequent remediation of such soils.
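For reference, point-source transport of this kind is commonly described by an advection-dispersion equation and discretized by finite differences; the 1-D form below is a generic sketch (the abstract does not state the exact governing equation, boundary conditions or retardation terms used), with $C$ the concentration, $D$ the dispersion coefficient, $v$ the seepage velocity and $R$ a retardation factor:

```latex
R\,\frac{\partial C}{\partial t} = D\,\frac{\partial^{2} C}{\partial x^{2}} - v\,\frac{\partial C}{\partial x},
\qquad
C_i^{n+1} = C_i^{n} + \frac{\Delta t}{R}\left[ D\,\frac{C_{i+1}^{n} - 2C_i^{n} + C_{i-1}^{n}}{\Delta x^{2}} - v\,\frac{C_{i+1}^{n} - C_{i-1}^{n}}{2\,\Delta x} \right]
```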
Keywords: Contaminant, dispersion, point or leaky source, surface plot, soil.
42 Optimization of the Co-Precipitation of Industrial Waste Metals in a Continuous Reactor System
Authors: Thomas S. Abia II, Citlali Garcia-Saucedo
Abstract:
A continuous copper precipitation treatment (CCPT) system was conceived at Intel Chandler Site to serve as a first-of-kind (FOK) facility-scale waste copper (Cu), nickel (Ni), and manganese (Mn) co-precipitation facility. The process was designed to treat highly variable wastewater discharged from a substrate packaging research factory. The paper discusses metals co-precipitation induced by internal changes for manufacturing facilities that lack the capacity for hardware expansion due to real estate restrictions, aggressive schedules, or budgetary constraints. Herein, operating parameters such as pH and oxidation reduction potential (ORP) were examined to analyze the ability of the CCPT System to immobilize various waste metals. Additionally, influential factors such as influent concentrations and retention times were investigated to quantify the environmental variability against system performance. A total of 2,027 samples were analyzed and statistically evaluated to measure the performance of CCPT that was internally retrofitted for Mn abatement to meet environmental regulations. In order to enhance the consistency of the influent, a separate holding tank was cannibalized from another system to collect and slow-feed the segregated Mn wastewater from the factory into CCPT. As a result, the baseline influent Mn decreased from 17.2±18.7 mg·L⁻¹ at pre-pilot to 5.15±8.11 mg·L⁻¹ post-pilot (70.1% reduction). Likewise, the pre-trial and post-trial average influent Cu values to CCPT were 52.0±54.6 mg·L⁻¹ and 33.9±12.7 mg·L⁻¹, respectively (34.8% reduction). However, the raw Ni content of 0.97±0.39 mg·L⁻¹ at pre-pilot increased to 1.06±0.17 mg·L⁻¹ at post-pilot. The average Mn output declined from 10.9±11.7 mg·L⁻¹ at pre-pilot to 0.44±1.33 mg·L⁻¹ at post-pilot (96.0% reduction) as a result of the pH and ORP operating setpoint changes. In similar fashion, the output Cu quality improved from 1.60±5.38 mg·L⁻¹ to 0.55±1.02 mg·L⁻¹ (65.6% reduction) while the Ni output sustained a 50% enhancement during the pilot study (0.22±0.19 mg·L⁻¹ reduced to 0.11±0.06 mg·L⁻¹). pH and ORP were shown to be significantly instrumental to the precipitative versatility of the CCPT System.
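As a quick arithmetic check, the reductions quoted above follow directly from the pre- and post-pilot mean concentrations (values taken from the abstract; the snippet below is only an illustrative calculation, not part of the study's analysis):

```python
# Percent reduction computed from pre-/post-pilot mean concentrations (mg/L)
pre_post = {"Mn influent": (17.2, 5.15), "Cu influent": (52.0, 33.9),
            "Mn effluent": (10.9, 0.44), "Cu effluent": (1.60, 0.55)}

for name, (pre, post) in pre_post.items():
    reduction = 100.0 * (pre - post) / pre
    print(f"{name}: {reduction:.1f}% reduction")  # e.g. Mn influent: 70.1% reduction
```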
Keywords: Copper, co-precipitation, industrial wastewater treatment, manganese, optimization, pilot study.
41 Hazard Contributing Factors Classification for Petrol Fuel Station
Authors: Mirza Munir Ahmed, S.R.M. Kutty, Mohd Faris Khamidi, Idris Othman, Azmi Mohd Shariff
Abstract:
Petrol Fuel Station (PFS) has potential hazards to the people, assets, environment and reputation of an operating company. Fire hazards, static electricity, and air pollution evoked by aliphatic and aromatic organic compounds are major causes of accident/incident occurrence at fuel stations. Activities and conditions such as carelessness, maintenance, housekeeping, slips, trips and falls, transportation hazards, major and minor injuries, robbery and snake bites have the potential to create unsafe conditions. The level of risk of these hazards varies according to location and country. The emphasis placed on safety considerations by governments is variable all around the world. Developed countries' safety records are much better than the safety statistics of developing countries. There is no significant approach available to highlight the unsafe acts and unsafe conditions during operation and maintenance of fuel stations. Fuel stations are among the most commonly available facilities that contain flammable and hazardous materials. Due to their continuous operation, fuel stations pose various hazards to people, the environment and the assets of an organization. To control these hazards, there is a need for a specific approach. PFS operation is unique compared to other businesses. For smooth operations it demands the involvement of the operating company, the contractor and the operator group. This study focuses on hazard contributing factors that have the potential to make PFS operation risky. One year of data was collected, 902 activities were analyzed, and comparisons were made to highlight significant contributing factors. The study will provide help and assistance to PFS outlet marketing companies to make their fuel station operation safer. It will help health, safety and environment (HSE) professionals to close the gap related to safety matters at PFS.
Keywords: Accident, contributing factors, carelessness, fire, explosion, injuries.
40 The Effect of Discontinued Water Spray Cooling on the Heat Transfer Coefficient
Authors: J. Hrabovský, M. Chabičovský, J. Horský
Abstract:
Water spray cooling is a technique typically used in heat treatment and other metallurgical processes where controlled temperature regimes are required. Water spray cooling is used in static (without movement) or dynamic (with movement of the steel plate) regimes. The static regime is notable for the fixed position of the hot steel plate and fixed spray nozzle. This regime is typical for quenching systems focused on heat treatment of the steel plate. The second application of spray cooling is the dynamic regime. The dynamic regime is notable for its static section cooling system and moving steel plate. This regime is used in rolling and finishing mills. The fixed position of cooling sections with nozzles and the movement of the steel plate produce nonhomogeneous water distribution on the steel plate. The length of cooling sections and placement of water nozzles, in combination with the nonhomogeneity of water distribution, lead to discontinued or interrupted cooling conditions. The impact of static and dynamic regimes on cooling intensity and the heat transfer coefficient during the cooling process of steel plates is an important issue. Heat treatment of steel is accompanied by oxide scale growth. The oxide scale layers can significantly modify the cooling properties and intensity during cooling. The combination of static and dynamic (section) regimes with the variable thickness of the oxide scale layer on the steel surface impacts the final cooling intensity. The influence of the oxide scale layers under different cooling regimes was studied using experimental measurements and numerical analysis. The experimental measurements compared both types of cooling regimes and the cooling of scale-free and oxidized surfaces. A numerical analysis was prepared to simulate the cooling process with different section conditions and samples with different oxide scale layers.
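In this context, the heat transfer coefficient is conventionally defined through Newton's law of cooling (a standard definition stated here for clarity, not a formula quoted from the abstract), with $q$ the surface heat flux, $T_s$ the plate surface temperature and $T_w$ the water temperature:

```latex
q = h\,(T_s - T_w), \qquad h = \frac{q}{T_s - T_w}
```

In experiments of this type, $h$ is typically recovered from measured temperature histories by solving an inverse heat conduction problem.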
Keywords: Heat transfer coefficient, numerical analysis, oxide layer, spray cooling.
39 Multi-Objective Optimization of Run-of-River Small-Hydropower Plants Considering Both Investment Cost and Annual Energy Generation
Authors: Amèdédjihundé H. J. Hounnou, Frédéric Dubas, François-Xavier Fifatin, Didier Chamagne, Antoine Vianou
Abstract:
This paper presents the techno-economic evaluation of run-of-river small-hydropower plants. In this regard, a multi-objective optimization procedure is proposed for the optimal sizing of the hydropower plants, and NSGA-II is employed as the optimization algorithm. Annual generated energy and investment cost are considered as the objective functions, and the number of generator units (n) and the nominal turbine flow rate (QT) constitute the decision variables. The site of Yeripao in Benin is considered as the case study. We have categorized the river of this site using its environmental characteristics: gross head, and the first quartile, median, third quartile and mean of flow. The effects of each decision variable on the objective functions are analysed. The results give a Pareto front which represents the trade-offs between annual energy generation and the investment cost of hydropower plants, as well as the recommended optimal solutions. We noted that as the annual energy generation increases, the investment cost rises. Thus, maximizing energy generation is contradictory with minimizing the investment cost. Moreover, we have noted that the solutions of the Pareto front are grouped according to the number of generator units (n). The results also illustrate that the costs per kWh are grouped according to n and rise with the increase of the nominal turbine flow rate. The lowest investment costs per kWh are obtained for n equal to one and lie between 0.065 and 0.180 €/kWh. For each value of n (equal to 1, 2, 3 or 4), the investment cost and the investment cost per kWh increase almost linearly with increasing nominal turbine flow rate, while the annual generated energy increases logarithmically with increasing nominal turbine flow rate. This study, made for the Yeripao river, can be applied to other rivers with their own characteristics.
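As an illustration of how such a Pareto front can be extracted, the sketch below brute-forces non-dominated (n, QT) designs using purely hypothetical cost and energy functions; the actual NSGA-II setup, cost model and Yeripao flow data are not reproduced here:

```python
import math

def annual_energy(n, qt):
    """Hypothetical annual energy model (kWh): grows with flow but saturates."""
    return n * 1.0e5 * math.log(1.0 + qt)

def investment_cost(n, qt):
    """Hypothetical investment cost model (EUR): roughly linear in unit flow."""
    return n * (5.0e4 + 8.0e4 * qt)

# Candidate designs: number of generator units n and nominal turbine flow QT (m^3/s)
designs = [(n, round(0.1 * k, 1)) for n in (1, 2, 3, 4) for k in range(1, 31)]
points = [(investment_cost(n, qt), -annual_energy(n, qt), n, qt) for n, qt in designs]

# Non-dominated set: minimize cost and maximize energy (i.e. minimize -energy)
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q[:2] != p[:2] for q in points)]

for cost, neg_e, n, qt in sorted(pareto):
    print(f"n={n}, QT={qt} m^3/s: cost={cost:,.0f} EUR, energy={-neg_e:,.0f} kWh")
```

The illustrative functions reproduce the qualitative behaviour reported above: cost grows roughly linearly with QT while energy grows logarithmically, so the front groups by n.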
Keywords: Hydropower plant, investment cost, multi-objective optimization, number of generator units.
38 The Effects of Mountain Biking as Psychomotor Instrument in Physical Education: Balance’s Evaluation
Authors: Péricles Maia Andrade, Temístocles Damasceno Silva, Hector Luiz Rodrigues Munaro
Abstract:
School physical education has been going through several changes over the years, and diversification of its content according to specific interests is one of the reasons for these changes. Formal education therefore does not have to stay apart, but needs to open up to the possibilities offered by the world; thus Mountain Biking, an adventure sport, offers several opportunities for intervention. Its application in the school allows diverse interventions in psychomotor development, besides opening possibilities for other contents, respecting the previous experiences of the students in their common environment. The choice of theme was due to affinity with the practice and experience of Mountain Biking at different levels, both competitive and recreational, at professional and amateur standard, focusing in principle on the bases of cycling, coupled with the inclusion in the Centre for Studies in Management of Sport and Leisure of the Southwest Bahia State University and the preview of the modality's potential to help children's psychomotor development. The goal of this research was to demonstrate, as a pilot project, the effects of Mountain Biking as a psychomotor instrument in physical education on one of the psychomotor valences, Balance, evaluating Immobility, Static Balance and Dynamic Balance. The methodology used Fonseca's Psychomotor Battery with 10 students (n=10) of a Brazilian public primary school, aged between 9 and 11 years old, using Mountain Biking contents. Balance skills were dichotomized into Regular and Good. Regarding the variable Immobility, in the initial test, regardless of gender, 70% (n = 7) were considered Regular. After four months of activity, the Good profile, which had only 30% (n = 3) of the sample, evolved to 60% (n = 6). For Static and Dynamic Balance there was an increase of 30% (n = 3) and 50% (n = 5), respectively, for Good. Between genders, female evolution towards Good was better in Immobility and in Static Balance, while male evolution was better observed in Dynamic Balance, with 66.7% (n = 4) for Good. Respecting the particularities of motor development, an indication of the positive effects of mountain biking on the evolution of balance was perceived, necessitating studies with larger samples.
Keywords: Psychomotricity, balance, mountain biking, education.
37 Offline Parameter Identification and State-of-Charge Estimation for Healthy and Aged Electric Vehicle Batteries Based on the Combined Model
Authors: Xiaowei Zhang, Min Xu, Saeid Habibi, Fengjun Yan, Ryan Ahmed
Abstract:
Recently, Electric Vehicles (EVs) have received extensive consideration since they offer a more sustainable and greener transportation alternative compared to fossil-fuel propelled vehicles. Lithium-Ion (Li-ion) batteries are increasingly being deployed in EVs because of their high energy density, high cell-level voltage, and low rate of self-discharge. Since Li-ion batteries represent the most expensive component in the EV powertrain, accurate monitoring and control strategies must be executed to ensure their prolonged lifespan. The Battery Management System (BMS) has to accurately estimate parameters such as the battery State-of-Charge (SOC), State-of-Health (SOH), and Remaining Useful Life (RUL). In order for the BMS to estimate these parameters, an accurate and control-oriented battery model has to work collaboratively with a robust state and parameter estimation strategy. Since battery physical parameters, such as the internal resistance and diffusion coefficient, change depending on the battery state-of-life (SOL), the BMS has to be adaptive to accommodate this change. In this paper, an extensive battery aging study has been conducted over a 12-month period on 5.4 Ah, 3.7 V lithium polymer cells. Instead of using fixed charging/discharging aging cycles at a fixed C-rate, a set of real-world driving scenarios has been used to age the cells. The test has been interrupted every 5% capacity degradation by a set of reference performance tests to assess the battery degradation and track model parameters. As the battery ages, the combined model parameters are optimized and tracked in an offline mode over the entire battery lifespan. Based on the optimized model, state and parameter estimation strategies based on the Extended Kalman Filter (EKF) and the relatively new Smooth Variable Structure Filter (SVSF) have been applied to estimate the SOC at various states of life.
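A highly simplified, one-state EKF sketch for SOC estimation is given below. It assumes a basic OCV-R cell model with a hypothetical linear OCV curve and placeholder parameter values; the paper's combined model and SVSF implementation are more elaborate and are not reproduced here:

```python
import numpy as np

# Hypothetical cell parameters (not the paper's identified values)
CAPACITY_C = 5.4 * 3600.0   # capacity in Coulombs (5.4 Ah cell)
R0 = 0.02                   # ohmic resistance [ohm]

def ocv(soc):               # placeholder OCV curve (linear approximation)
    return 3.4 + 0.8 * soc

def docv_dsoc(soc):         # OCV slope used for the EKF measurement Jacobian
    return 0.8

def ekf_soc(current, voltage, dt, soc0=0.9, P=1e-2, Rv=1e-3, Qw=1e-7):
    """One-state EKF: state = SOC, input = current (A, discharge positive),
    measurement = terminal voltage (V)."""
    soc = soc0
    est = []
    for i, v in zip(current, voltage):
        # Predict step: coulomb counting
        soc = soc - i * dt / CAPACITY_C
        P = P + Qw
        # Update step with terminal-voltage measurement v = OCV(soc) - R0*i
        H = docv_dsoc(soc)
        K = P * H / (H * P * H + Rv)
        soc = soc + K * (v - (ocv(soc) - R0 * i))
        P = (1.0 - K * H) * P
        est.append(soc)
    return np.array(est)
```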
Keywords: Lithium-Ion batteries, genetic algorithm optimization, battery aging test, and parameter identification.
36 Engineering Photodynamic with Radioactive Therapeutic Systems for Sustainable Molecular Polarity: Autopoiesis Systems
Authors: Moustafa Osman Mohammed
Abstract:
This paper introduces Luhmann's autopoietic social systems, starting with the original concept of autopoiesis by biologists and scientists, including the modification of general systems based on socialized medicine. A specific type of autopoietic system is explained in the three existing groups of ecological phenomena: interaction, social and medical sciences. This hypothesis model, nevertheless, has a nonlinear interaction with its natural environment, an 'interactional cycle' for the exchange of photon energy with molecules without any changes in topology. The external forces in the system's environment might be concomitant with the influence of natural fluctuations (e.g. radioactive radiation, electromagnetic waves). The cantilever sensor deploys insights to the future chip processor for prevention of social metabolic systems. Thus, the circuits with resonant electric and optical properties are prototyped on board as an intra-chip/inter-chip transmission for producing electromagnetic energy in a range of approximately 1.7 mA at 3.3 V to service detection in locomotion with the least significant power losses. Nowadays, therapeutic systems assimilate materials from embryonic stem cells to aggregate multiple functions of the vessels' natural de-cellular structure for replenishment. Meanwhile, the interior actuators deploy base-pair complementarity of nucleotides for the symmetric arrangement, in particular bacterial nanonetworks of the sequence cycle, creating double-stranded DNA strings. The DNA strands must be sequenced, assembled, and decoded in order to reconstruct the original source reliably. The exterior actuators are designed with the ability to sense different variations in the corresponding patterns regarding beat-to-beat heart rate variability (HRV) for spatial autocorrelation of molecular communication, which consists of human electromagnetic, piezoelectric, electrostatic and electrothermal energy to monitor and transfer the dynamic changes of all the cantilevers simultaneously in a real-time workspace with high precision. A prototype-enabled dynamic energy sensor has been investigated in the laboratory for inclusion of nanoscale devices in the architecture, with fuzzy logic control for detection of thermal and electrostatic changes and with optoelectronic devices to interpret the uncertainty associated with signal interference. Ultimately, the controversial aspect of molecular frictional properties is adjusted and forms unique spatial structure modules, providing the environment's mutual contribution in the investigation of mass temperature changes due to the pathogenic archival architecture of clusters.
Keywords: Autopoiesis, quantum photonics, portable energy, photonic structure, photodynamic therapeutic system.
35 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank
Authors: Jalal Haghighat Monfared, Zahra Akbari
Abstract:
Today, business executives need to have useful information to make better decisions. Banks have also been using information tools so that they can direct the decision-making process in order to achieve their desired goals by rapidly extracting information from sources with the help of business intelligence. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these components and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's General Departments (190 people) who use business intelligence reports. The sample size of this study was 123, determined randomly by statistical methods. In this research, relevant statistical inference has been used for data analysis and hypothesis testing. In the first stage, the normality of the data was investigated using the Kolmogorov-Smirnov test, and in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, integration with other systems, user access, flexibility and risk management support, the flexibility of the business intelligence system was the most strongly correlated with the dependent variable of the present research. This shows that it is necessary for Mellat Bank to pay more attention to choosing business intelligence systems with high flexibility in terms of the ability to produce custom formatted reports. Subsequently, the quality of data in the business intelligence systems showed the strongest relationship with the quality of decision making. Therefore, improving the quality of data, including the source of data (internal or external), the type of data in quantitative or qualitative terms, the credibility of the data and the perceptions of those who use the business intelligence system, improves the quality of decision making in Mellat Bank.
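The first two statistical steps described above (normality check, then correlation) can be sketched as follows with hypothetical questionnaire scores; the confirmatory factor analysis and structural equation model used in the study are not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical aggregated questionnaire scores for 123 respondents
bi_capability = rng.normal(3.8, 0.6, 123)                     # BI capability index
decision_quality = 0.5 * bi_capability + rng.normal(1.9, 0.4, 123)

# Step 1: Kolmogorov-Smirnov test against a fitted normal distribution
for name, x in [("BI capability", bi_capability), ("Decision quality", decision_quality)]:
    stat, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"{name}: KS statistic = {stat:.3f}, p = {p:.3f}")

# Step 2: Pearson correlation between the two constructs
r, p = stats.pearsonr(bi_capability, decision_quality)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```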
Keywords: Business intelligence, business intelligence capability, decision making, decision quality.
34 Matrix-Based Linear Analysis of Switched Reluctance Generator with Optimum Pole Angles Determination
Authors: Walid A. M. Ghoneim, Hamdy A. Ashour, Asmaa E. Abdo
Abstract:
In this paper, linear analysis of a Switched Reluctance Generator (SRG) model is applied to the most common configurations (4/2, 6/4 and 8/6) for both conventional short-pitched and fully-pitched designs, in order to determine the optimum stator/rotor pole angles at which the maximum output voltage is generated per unit excitation current. This study is focused on SRG analysis and design as a proposed solution for renewable energy applications, such as wind energy conversion systems. The world's potential to develop renewable energy technologies through dedicated scientific research was the motive behind this study, due to its positive impact on economy and environment. In addition, the problem of rare earth metals (permanent magnets) caused by mining limitations, export bans by top producers and environmental restrictions leads to the unavailability of materials used for rotating machine manufacturing. This challenge gave the authors the opportunity to study, analyze and determine the optimum design of the SRG, which has the benefit of being free from permanent magnets and rotor windings, with a flexible control system, and is compatible with any application that requires variable-speed operation. In addition, the SRG has been proved to be very efficient and reliable in both low-speed and high-speed applications. Linear analysis was performed using MATLAB simulations based on the modified generalized matrix approach for the Switched Reluctance Machine (SRM). About 90 different pole angle combinations and excitation patterns were simulated through this study, and the optimum output results for each case were recorded and presented in detail. This procedure has been proved to be applicable for any SRG configuration, dimension and excitation pattern. The delivered results of this study provide evidence for using the 4-phase 8/6 fully pitched SRG as the main optimum configuration for the same machine dimensions at the same angular speed.
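In linear (magnetically unsaturated) analysis of a switched reluctance machine, the per-phase flux linkage, voltage and torque are commonly written as below; these are standard textbook relations stated here for context rather than equations quoted from the paper:

```latex
\psi(\theta, i) = L(\theta)\, i, \qquad
v = R\,i + \frac{d\psi}{dt} = R\,i + L(\theta)\frac{di}{dt} + i\,\omega\frac{dL(\theta)}{d\theta}, \qquad
T_e = \frac{1}{2}\, i^{2}\, \frac{dL(\theta)}{d\theta}
```

In generating mode the phase is excited over the region where $dL/d\theta < 0$, so the motional term returns energy to the supply.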
Keywords: Generalized matrix approach, linear analysis, renewable applications, switched reluctance generator, SRG.
33 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients
Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi
Abstract:
Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as the time till death. A frailty model is a random effect model for time-to-event data, where the random effect has a multiplicative influence on the baseline hazard function. This article aims to investigate the use of the gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: During the one-year study period (May 2008-May 2009), data were taken from the recorded information of patients with liver cirrhosis who were scheduled for liver transplantation and were followed up for at least seven years in Imam Khomeini Hospital in Iran. In order to determine the factors affecting cirrhotic patients' survival in the presence of latent variables, the gamma frailty distribution was applied. In this article, parametric models, such as the Exponential and Weibull distributions, were considered for survival time. Data analysis was performed using R software, and an error level of 0.05 was considered for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died, among them 48 (58%) men and 34 (42%) women. The main cause of liver cirrhosis was found to be hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second factor. Overall, 7-year survival was 28.44 months; for deceased patients and for censored patients it was 19.33 and 31.79 months, respectively. Using parametric survival models, Exponential and Weibull models with the gamma frailty distribution were fitted to the cirrhosis data. In both models, factors including age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: To investigate the factors affecting the time of death of patients with liver cirrhosis in the presence of latent variables, the gamma frailty model with parametric distributions seems desirable.
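For reference, the gamma frailty model referred to here takes the standard form below, where the unobserved frailty $z_i$ multiplies the baseline hazard (Exponential or Weibull in this study) and is gamma distributed with unit mean and variance $\theta$; this is the textbook formulation, not notation taken from the paper:

```latex
h_i(t \mid z_i) = z_i \, h_0(t)\, \exp\!\left(\boldsymbol{\beta}^{\top}\mathbf{x}_i\right), \qquad
z_i \sim \mathrm{Gamma}\!\left(\tfrac{1}{\theta}, \tfrac{1}{\theta}\right), \quad
\mathbb{E}[z_i]=1, \ \mathrm{Var}(z_i)=\theta
```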
Keywords: Frailty model, latent variables, liver cirrhosis, parametric distribution.
32 Simulation and Parameterization by the Finite Element Method of a C-Shaped Electromagnet for Application in the Characterization of Magnetic Properties of Materials
Authors: A. A. Velásquez, J. Baena
Abstract:
This article presents the simulation, parameterization and optimization of an electromagnet with the C-shaped configuration, intended for the study of magnetic properties of materials. The electromagnet studied consists of a C-shaped yoke, which provides self-shielding for minimizing losses of magnetic flux density, two poles of high magnetic permeability and power coils wound on the poles. The main physical variable studied was the static magnetic flux density in a column within the gap between the poles, with a 4 cm² square cross section and a length of 5 cm, seeking a suitable set of parameters that allow us to achieve a uniform magnetic flux density of 1×10⁴ Gauss or values above this in the column, when the system operates at room temperature and with a current consumption not exceeding 5 A. By means of a magnetostatic analysis by the finite element method, the magnetic flux density and the distribution of the magnetic field lines were visualized and quantified. From the results obtained by simulating an initial configuration of the electromagnet, a structural optimization of the geometry of the adjustable caps for the ends of the poles was performed. The effect of the magnetic permeability of the soft magnetic materials used in the pole system, such as low-carbon steel (0.08% C), Permalloy (45% Ni, 54.7% Fe) and Mumetal (21.2% Fe, 78.5% Ni), was also evaluated. The intensity and uniformity of the magnetic field in the gap showed a high dependence on the factors described above. The magnetic field achieved in the column was uniform and its magnitude ranged between 1.5×10⁴ Gauss and 1.9×10⁴ Gauss according to the pole material used, with the possibility of increasing the magnetic field by choosing a suitable geometry of the cap, introducing a cooling system for the coils and adjusting the spacing between the poles. This makes the device a versatile and scalable tool to generate the magnetic field necessary to perform magnetic characterization of materials by techniques such as vibrating sample magnetometry (VSM), Hall-effect and Kerr-effect magnetometry, among others. Additionally, a CAD design of the modules of the electromagnet is presented in order to facilitate the construction and scaling of the physical device.
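As a first-order design check (an idealized magnetic-circuit estimate, not a result from the paper's FEM analysis), the flux density in the air gap of such a C-shaped electromagnet can be approximated by neglecting fringing, leakage and saturation, with $N$ the total number of coil turns, $I$ the coil current, $l_{gap}$ the total gap length, $l_{core}$ the mean core path length and $\mu_r$ the relative permeability of the yoke and pole material:

```latex
B_{gap} \approx \frac{\mu_0\, N I}{\, l_{gap} + \dfrac{l_{core}}{\mu_r}\,}
```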
Keywords: Electromagnet, Finite Elements Method, Magnetostatic, Magnetometry, Modeling.
31 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning
Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
This study suggests an estimation method of stress distribution for beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of lattices of raw data from TLS to satisfy a suitable condition and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimation of the stress distribution of a structural member or the whole structure is one of the important factors for the safety evaluation of the structure. Existing sensors, which include the ESG (electric strain gauge) and LVDT (Linear Variable Differential Transformer), can be categorized as contact-type sensors which have to be installed on the structural members, and they also carry various limitations such as the need for separate space where the network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS form of LiDAR (light detection and ranging), which can measure the displacement of a target over a long range without the influence of the surrounding environment and also capture the whole shape of the structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds, which contain many points including their local coordinates. Point clouds are not linearly distributed but dispersed in shape. Thus, to analyze point clouds, interpolation is vitally needed. Through the formation of averaged lattices and CSSI for the raw data, a method which can estimate the displacement of a simple beam was developed. The developed method can also be extended to calculate the strain and finally be applied to estimate the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured by TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
Keywords: Structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation.
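A minimal sketch of the displacement-to-stress chain (smoothing spline, then curvature, then Euler-Bernoulli bending stress) is shown below with synthetic data and illustrative material parameters; the paper's lattice-averaging procedure, coordinate transformation and test parameters are not reproduced:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def stress_from_deflection(x, w, E=210e9, y=0.15, smoothing=1e-8):
    """Estimate bending stress along a beam from measured deflections.

    x : positions along the beam [m] (e.g. lattice centroids from TLS point clouds)
    w : averaged vertical deflections at x [m]
    E : Young's modulus [Pa]; y : distance from neutral axis to extreme fibre [m]
    All values here are illustrative assumptions, not the paper's test setup.
    """
    spline = UnivariateSpline(x, w, k=3, s=smoothing)   # cubic smoothing spline
    curvature = spline.derivative(n=2)(x)               # w''(x) ~ bending curvature
    strain = -y * curvature                             # Euler-Bernoulli bending strain
    return E * strain                                   # bending stress [Pa]

# Example with synthetic noisy deflection data for a simply supported beam
x = np.linspace(0.0, 6.0, 60)
w_true = -0.002 * np.sin(np.pi * x / 6.0)               # smooth deflected shape
w_meas = w_true + np.random.default_rng(1).normal(0, 2e-5, x.size)
sigma = stress_from_deflection(x, w_meas)
print(f"max |stress| ~ {abs(sigma).max() / 1e6:.1f} MPa")
```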
30 Monetary Evaluation of Dispatching Decisions in Consideration of Mode Choice Models
Authors: Marcel Schneider, Nils Nießen
Abstract:
Microscopic simulation tool kits allow for the consideration of both railway operations and the preceding timetable production. Block occupation conflicts on both process levels are often solved by using defined train priorities. These conflict resolutions (dispatching decisions) generate reactionary delays for the involved trains. The sum of reactionary delays is commonly used to evaluate the quality of railway operations, which describes the timetable robustness. It is either compared to an acceptable train performance, or the delays are appraised economically by linear monetary functions. It is impossible to adequately evaluate dispatching decisions without a well-founded objective function. This paper presents a new approach for the evaluation of dispatching decisions. The approach uses mode choice models and considers the behaviour of the end customers. These models evaluate the reactionary delays in more detail and consider other competing modes of transport. The new approach pursues the coupling of a microscopic model of railway operations with a macroscopic mode choice model. At first, it will be implemented for the railway operations process, but it can also be used for timetable production. The evaluation considers the possibility for the customer to change to other transport modes. The new approach starts with rail and road, but it can also be extended to air travel. The result of mode choice models is the modal split. The reactions of the end customers have an impact on the revenue of the train operating companies. Different purposes of travel have different payment reserves and tolerances towards late running. Aside from changes to revenues, longer journey times can also generate additional costs. The costs are either time- or track-specific and arise from required changes to rolling stock or train crew cycles. Only the variable values are summarised in the contribution margin, which is the basis for the monetary evaluation of delays. The contribution margin is calculated for different possible solutions to the same conflict. The conflict resolution is optimised until the monetary loss becomes minimal. The iterative process therefore determines an optimum conflict resolution by monitoring the change to the contribution margin. Furthermore, a monetary value of each dispatching decision can also be derived.
Keywords: Choice of mode, monetary evaluation, railway operations, reactionary delays.
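The mode choice step can be illustrated with a standard multinomial logit formulation: each mode receives a utility that worsens with travel time (including reactionary delay) and cost, and the modal split follows from the logit probabilities. The coefficients below are purely illustrative placeholders, not calibrated values from the paper:

```python
import math

def modal_split(times_min, costs_eur, beta_time=-0.04, beta_cost=-0.10, asc=None):
    """Multinomial logit split over competing modes (e.g. rail vs. road).

    times_min / costs_eur: dict mode -> generalized travel time [min] / cost [EUR].
    beta_* and asc (alternative-specific constants) are illustrative assumptions.
    """
    asc = asc or {m: 0.0 for m in times_min}
    utility = {m: asc[m] + beta_time * times_min[m] + beta_cost * costs_eur[m]
               for m in times_min}
    denom = sum(math.exp(u) for u in utility.values())
    return {m: math.exp(u) / denom for m, u in utility.items()}

# Effect of a 15-minute reactionary delay on the rail share
base = modal_split({"rail": 60, "road": 75}, {"rail": 25.0, "road": 30.0})
delayed = modal_split({"rail": 75, "road": 75}, {"rail": 25.0, "road": 30.0})
print(f"rail share: {base['rail']:.2%} -> {delayed['rail']:.2%} after the delay")
```

The change in rail share, multiplied by demand and fare, is the kind of revenue effect that would enter the contribution margin described above.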
29 Parametric Non-Linear Analysis of Reinforced Concrete Frames with Supplemental Damping Systems
Authors: Daniele Losanno, Giorgio Serino
Abstract:
This paper focuses on the parametric analysis of reinforced concrete structures equipped with supplemental damping braces. Practitioners still lack sufficient data for the current design of damper-added structures and often reduce the real model to a pure damper braced structure, even if this assumption is neither realistic nor conservative. In the present study, the damping brace is modelled as a linear supporting brace connected in series with the viscous/hysteretic damper. The deformation capacity of existing structures is usually not adequate to undergo the design earthquake. In spite of this, additional dampers could be introduced, strongly limiting structural damage to acceptable values, or in some cases, reducing the frame response to elastic behavior. This work is aimed at providing useful considerations for the retrofit of existing buildings by means of supplemental damping braces. The study explicitly takes into consideration the variability of (a) the relative frame to supporting brace stiffness, (b) the dampers' coefficient (viscous coefficient or yielding force) and (c) the non-linear frame behavior. Non-linear time history analyses have been run to account for both the dampers' behavior and the non-linear plastic hinges modelled by the Pivot hysteretic type. Parametric analysis based on previous studies on SDOF or MDOF linear frames provides reference values for a nearly optimal damping system design. With respect to the bare frame configuration, the seismic response of the damper-added frame is strongly improved, limiting deformations to acceptable values far below the ultimate capacity. Results of the analysis also demonstrated the beneficial effect of stiffer supporting braces, thus highlighting the inadequacy of simplified pure damper models. At the same time, the effect of variable damping coefficient and yielding force has to be treated as an optimization problem.
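The damper-plus-supporting-brace assembly described above behaves as a Maxwell element: the elastic brace (stiffness $k_b$) and the damper act in series, so for a linear viscous damper with coefficient $c_d$ the imposed deformation rate and the damper force are related by the standard expression below (stated for clarity, not quoted from the paper):

```latex
\dot{u}(t) = \frac{\dot{F}(t)}{k_b} + \frac{F(t)}{c_d}
```

A finite $k_b$ reduces the deformation rate actually seen by the damper, which is why the pure-damper idealization ($k_b \to \infty$) overestimates the dissipation, consistent with the benefit of stiffer braces reported above.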
Keywords: Brace stiffness, dissipative braces, non-linear analysis, plastic hinges, reinforced concrete.
28 Discrepant Views of Social Competence and Links with Social Phobia
Authors: Pamela-Zoe Topalli, Niina Junttila, Päivi M. Niemi, Klaus Ranta
Abstract:
Adolescents' biased perceptions about their social competence (SC), whether negative or positive, serve to influence their socioemotional adjustment, such as early feelings of social phobia (nowadays referred to as Social Anxiety Disorder, SAD). Despite the importance of biased self-perceptions in adolescents' psychosocial adjustment, the extent to which discrepancies between self- and others' evaluations of one's SC are linked to social phobic symptoms remains unclear in the literature. This study examined the perceptual discrepancy profiles between self- and peers' as well as between self- and teachers' evaluations of adolescents' SC and the interrelations of these profiles with self-reported social phobic symptoms. The participants were 390 3rd graders (15 years old) of a Finnish lower secondary school (50.8% boys, 49.2% girls). In contrast with the variable-centered approaches that have mainly been used by previous studies on this subject, this study used latent profile analysis (LPA), a person-centered approach which can provide information regarding risk profiles by capturing the heterogeneity within a population and classifying individuals into groups. LPA revealed the following five classes of discrepancy profiles: i) extremely negatively biased perceptions of SC, ii) negatively biased perceptions of SC, iii) quite realistic perceptions of SC, iv) positively biased perceptions of SC, and v) extremely positively biased perceptions of SC. Adolescents with extremely negatively biased and negatively biased perceptions of their own SC reported the highest number of social phobic symptoms. Adolescents with quite realistic, positively biased and extremely positively biased perceptions reported the lowest number of social phobic symptoms. The results point to the negatively and extremely negatively biased perceptions as possible contributors to social phobic symptoms. Moreover, the association of quite realistic perceptions with a low number of social phobic symptoms indicates their potential protective power against social phobia. Finally, positively and extremely positively biased perceptions of SC are negatively associated with social phobic symptoms in this study. However, the profile of extremely positively biased perceptions might also be linked to the existence of externalizing problems such as antisocial behavior (e.g. disruptive impulsivity). The current findings highlight the importance of considering discrepancies between self- and others' perceptions of one's SC in clinical and research efforts. Interventions designed to prevent or moderate social phobic symptoms need to take into account individual needs rather than aiming for uniform treatment. Implications and future directions are discussed.
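Latent profile analysis is closely related to fitting a finite Gaussian mixture to the continuous indicator variables and choosing the number of profiles with an information criterion. The sketch below uses hypothetical discrepancy scores (self-minus-peer and self-minus-teacher ratings) and a Gaussian mixture as a stand-in; it is not the software, data or exact model used in the study:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Hypothetical standardized discrepancy scores for 390 adolescents:
# column 0 = self minus peer rating, column 1 = self minus teacher rating
X = np.vstack([rng.normal(loc, 0.4, size=(78, 2)) for loc in (-1.5, -0.7, 0.0, 0.7, 1.5)])

# Fit 1..6 profile solutions and compare them by BIC (lower is better)
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(X)
    print(f"{k} profiles: BIC = {gm.bic(X):.1f}")

best = GaussianMixture(n_components=5, random_state=0).fit(X)
profiles = best.predict(X)          # profile membership for each adolescent
print(np.bincount(profiles))        # profile sizes
```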
Keywords: Adolescence, latent profile analysis, perceptual discrepancies, social competence, social phobia.
27 Modelling and Control of Milk Fermentation Process in Biochemical Reactor
Authors: Jožef Ritonja
Abstract:
The biochemical industry is one of the most important modern industries. Biochemical reactors are crucial devices of the biochemical industry. The essential bioprocess carried out in bioreactors is the fermentation process. A thorough insight into the fermentation process and the knowledge of how to control it are essential for effective use of bioreactors to produce products of high quality and in sufficient quantity. The development of the control system starts with the determination of a mathematical model that describes the steady state and dynamic properties of the controlled plant satisfactorily and is suitable for the development of the control system. The paper analyses the fermentation process in bioreactors thoroughly, using existing mathematical models. Most existing mathematical models do not allow the design of a control system for controlling the fermentation process in batch bioreactors. Due to this, a mathematical model was developed and presented that allows the development of a control system for batch bioreactors. Based on the developed mathematical model, a control system was designed to ensure optimal response of the biochemical quantities in the fermentation process. Due to the time-varying and non-linear nature of the controlled plant, a conventional control system with a proportional-integral-differential controller with constant parameters does not provide the desired transient response. An improved adaptive control system was proposed to improve the dynamics of the fermentation. The use of adaptive control is suggested because the parameter variations of the fermentation process are very slow. The developed control system was tested while producing dairy products in the laboratory bioreactor. The carbon dioxide concentration was chosen as the controlled variable. The carbon dioxide concentration correlates well with the other quantities significant for the quality of the fermentation process. The level of the carbon dioxide concentration gives important information about the fermentation process. The obtained results showed that the designed control system provides minimum error between the reference and actual values of carbon dioxide concentration during the transient response and in the steady state. The recommended control system makes reference signal tracking much more efficient than the currently used conventional control systems, which are based on linear control theory. The proposed control system represents a very effective solution for the improvement of the milk fermentation process.
Keywords: Bioprocess engineering, biochemical reactor, fermentation process, modeling, adaptive control.
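As a generic illustration of adaptive control for a slowly time-varying plant (not the controller developed in the paper), the sketch below applies the classic MIT-rule adaptation of a feedforward gain so that a toy first-order "CO2 response" tracks a reference model despite a drifting process gain; all models and tuning values are assumptions:

```python
import numpy as np

# MIT-rule sketch: adapt a feedforward gain so the measured CO2 response tracks
# a reference model, despite an unknown, slowly drifting process gain.
dt, T = 0.01, 60.0
t = np.arange(0.0, T, dt)
r = np.where((t % 20) < 10, 1.0, 0.4)      # CO2 concentration reference (normalized)

gamma = 0.5                                 # adaptation gain (tuning assumption)
a = 1.0                                     # known process/model pole (assumption)
theta, y, ym = 0.0, 0.0, 0.0                # adaptive gain, plant output, model output
for i, ri in enumerate(r):
    k = 2.0 + 0.5 * np.sin(0.05 * t[i])     # unknown, slowly varying process gain
    u = theta * ri                          # control law: adapted feedforward gain
    y += dt * (-a * y + k * u)              # first-order toy fermentation response
    ym += dt * (-a * ym + 1.0 * ri)         # reference model with unit DC gain
    e = y - ym
    theta += dt * (-gamma * e * ym)         # MIT rule: d(theta)/dt = -gamma * e * ym

print("final adapted feedforward gain:", round(theta, 3))
```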
26 Low Energy Technology for Leachate Valorisation
Authors: Jesús M. Martín, Francisco Corona, Dolores Hidalgo
Abstract:
Landfills present long-term threats to soil, air, groundwater and surface water due to the formation of greenhouse gases (methane and carbon dioxide) and leachate from decomposing garbage. The composition of leachate differs from site to site and also within the landfill. Leachates alter with time (over weeks to years), since the landfilled waste is biologically highly active and its composition varies. Mainly, the composition of the leachate depends on factors such as the characteristics of the waste, the moisture content, climatic conditions, the degree of compaction and the age of the landfill. Therefore, the leachate composition cannot be generalized and the traditional treatment models should be adapted in each case. Although leachate composition is highly variable, what different leachates have in common is hazardous constituents and their potential eco-toxicological effects on human health and on terrestrial ecosystems. Since leachate has distinct compositions, each landfill or dumping site would represent a different type of risk to its environment. Nevertheless, leachates always have high organic concentration, conductivity, heavy metal content and ammonia nitrogen. Leachate could affect the current and future quality of water bodies due to uncontrolled infiltrations. Therefore, control and treatment of leachate is one of the biggest issues in the design and management of urban solid waste treatment plants and landfills. This work presents a treatment model that will be carried out "in-situ" using a cost-effective novel technology that combines solar evaporation/condensation plus forward osmosis. The plant is powered by renewable energies (solar energy, biomass and residual heat), which will minimize the carbon footprint of the process. The final effluent quality is very high, allowing reuse (preferred) or discharge into watercourses. In the particular case of this work, the final effluents will be reused for cleaning and gardening purposes. A minor semi-solid residual stream is also generated in the process. Due to its special composition (rich in metals and inorganic elements), this stream will be valorized in ceramic industries to improve the characteristics of the final products.
Keywords: Forward osmosis, landfills, leachate valorization, solar evaporation.
25 The Impact of Supply Chain Strategy and Integration on Supply Chain Performance: Supply Chain Vulnerability as a Moderator
Authors: Yi-Chun Kuo, Jo-Chieh Lin
Abstract:
The objective of a supply chain strategy is to reduce waste and increase efficiency to attain cost benefits, and to guarantee supply chain flexibility when facing the ever-changing market environment in order to meet customer requirements. Strategy implementation aims to fulfill common goals and attain benefits by integrating upstream and downstream enterprises, sharing information, conducting common planning, and taking part in decision making, so as to enhance the overall performance of the supply chain. With the rise of outsourcing and globalization, the increasing dependence on suppliers and customers and the rapid development of information technology, the complexity and uncertainty of the supply chain have intensified, and supply chain vulnerability has surged, resulting in adverse effects on supply chain performance. Thus, this study uses supply chain vulnerability as a moderating variable and applies structural equation modeling (SEM) to determine the relationships among supply chain strategy, supply chain integration, and supply chain performance, as well as the moderating effect of supply chain vulnerability on supply chain performance. Data were collected by questionnaire from the management level of enterprises in Taiwan and China; 149 questionnaires were received. The result of confirmatory factor analysis shows that the path coefficients of supply chain strategy on supply chain integration and supply chain performance are positive (0.497, t = 4.914; 0.748, t = 5.919), having a significantly positive effect. Supply chain integration is also significantly positively correlated to supply chain performance (0.192, t = 2.273). The moderating effects of supply chain vulnerability on supply chain strategy and supply chain integration with respect to supply chain performance are significant (7.407; 4.687). In Taiwan, 97.73% of enterprises are small- and medium-sized enterprises (SMEs) focusing on receiving original equipment manufacturer (OEM) and original design manufacturer (ODM) orders. In order to meet the needs of customers and to respond to market changes, these enterprises especially focus on supply chain flexibility and their integration with upstream and downstream enterprises. According to the observations of this research, the effect of supply chain vulnerability on supply chain performance is significant, and so enterprises need to attach great importance to the management of supply chain risk and conduct risk analysis on their suppliers in order to formulate response strategies when facing emergency situations. At the same time, risk management should be incorporated into the supply chain so as to reduce the effect of supply chain vulnerability on the overall supply chain performance.
Keywords: Supply chain integration, supply chain performance, supply chain vulnerability, structural equation modeling.
24 A Concept Study to Assist Non-Profit Organizations to Better Target Developing Countries
Authors: Malek Makki
Abstract:
The main purpose of this research study is to assist non-profit organizations (NPOs) to better segment a group of least developed countries and to optimally target the neediest areas, so that the provided aid makes a positive and lasting difference. We applied international marketing and strategy approaches to segment a sub-group of candidates among a group of 151 countries identified by the UN-G77 list, and furthermore, we point out the areas of priority. We use reliable and well-known criteria on the basis of economics, geography, demography and behavior. These criteria can be objectively estimated and updated so that a follow-up can be performed to measure the outcomes of any program. We selected 12 socio-economic criteria that complement each other: GDP per capita, GDP growth, industry value added, exports per capita, fragile state index, corruption perception index, environmental protection index, ease of doing business index, global competitiveness index, Internet use, public spending on education, and employment rate. A weight was attributed to each variable to highlight the relative importance of each criterion within the country. Care was taken to collect the most recent available data from trusted, well-known international organizations (IMF, WB, WEF, and WTO). A construct of equivalence was carried out to compare the same variables across countries. The combination of all these weighted, estimated criteria provides us with a global index that represents the level of development per country. An absolute index that combines wars and risks was introduced to exclude or include a country on the basis of conflicts and state collapse. The final step, applied to the included countries, consists of a benchmarking method to select the segment of countries and the percentile of each criterion. The results of this study allowed us to exclude 16 countries for risk and security reasons. We also excluded four countries because they lack reliable and complete data. The other countries were classified per percentile through their global index, and we identified the neediest countries and the areas where aid is most required, to help any NPO prioritize its area of implementation. This new concept is based on defined, actionable, accessible and accurate variables by which NPOs can implement their programs, and it can be extended to for-profit companies performing their corporate social responsibility acts.
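The weighted global index described above can be sketched as a simple normalize-weight-aggregate calculation; the countries, criteria values and weights below are made up for illustration and are not the study's data:

```python
import numpy as np

def composite_index(indicators, weights, higher_is_better):
    """Weighted development index from normalized socio-economic indicators.

    indicators: dict country -> dict criterion -> value (hypothetical numbers below).
    weights: dict criterion -> relative weight (should sum to 1).
    higher_is_better: dict criterion -> bool, to orient every criterion the same way.
    """
    countries, scores = list(indicators), {}
    for crit, w in weights.items():
        vals = np.array([indicators[c][crit] for c in countries], dtype=float)
        # min-max normalization to [0, 1] (a simple construct of equivalence)
        norm = (vals - vals.min()) / (vals.max() - vals.min() + 1e-12)
        if not higher_is_better[crit]:
            norm = 1.0 - norm
        for c, v in zip(countries, norm):
            scores[c] = scores.get(c, 0.0) + w * v
    return scores

# Illustrative example with made-up values for three criteria
ind = {"A": {"gdp_pc": 900,  "cpi": 35, "internet": 12},
       "B": {"gdp_pc": 2500, "cpi": 55, "internet": 40},
       "C": {"gdp_pc": 600,  "cpi": 25, "internet": 8}}
w = {"gdp_pc": 0.5, "cpi": 0.3, "internet": 0.2}
better = {"gdp_pc": True, "cpi": True, "internet": True}   # higher CPI = less corrupt
idx = composite_index(ind, w, better)
print(sorted(idx, key=idx.get))   # lowest composite index = neediest country first
```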
Keywords: Developing countries, International marketing, non-profit organization, segmentation.
23 Landowners' Participation Behavior on the Payment for Environmental Service (PES): Evidences from Taiwan
Authors: Wan-Yu Liu
Abstract:
To respond to the Kyoto Protocol, the Payment for Environmental Services (PES) policy entitled the "Plain Landscape Afforestation Program (PLAP)" was certified by the Executive Yuan in Taiwan on 31 August 2001 and has been implemented for six years since 1 January 2002. Although the PLAP has received a lot of positive comments, there are still many difficulties during the process of implementation, such as insufficient technology for afforestation, private landowners' low interest in participating in the PLAP, insufficient subsidies, and so on, which are potential threats that could hinder the PLAP from moving forward in the future. In this paper, selecting Ping-Tung County in Taiwan as a sample region and targeting those private landowners with and without the intention to participate in the PLAP, respectively, we conduct an empirical analysis based on the Logit model to investigate the factors that determine whether private landowners join the PLAP, so as to understand the incentive effects of the PLAP upon the personal decision on afforestation. The possible factors that might determine a private landowner's participation in the PLAP include the landowner's characteristics, cropland characteristics, and policy factors. Among them, the policy factors include the afforestation subsidy amount (+), the duration of the afforestation subsidy (+), the rules on adjoining and adjacent areas (+), and so on, which do not reach statistical significance, although the directions of the variable signs are consistent with the intuition behind the policy. As for the landowners' characteristics, each of the age (+), education level (–), and annual household income (+) variables is significant at the 10% level; as for the cropland characteristics, each of cropland area (+), cropland price (–), and the number of cropland parcels (–) is significant at the 1% level. In light of the above, the cropland characteristics are the dominant factor that determines the probability of a landowner's participation in the PLAP. In the Logit model established by this paper, the probability of correctly estimating non-participants is 98%, the probability of correctly estimating participants is 71.8%, and the probability for the overall estimation is 95%. In addition, the Hosmer-Lemeshow test and the omnibus test also revealed that the Logit model in this paper provides good goodness of fit and good predictive power in forecasting private landowners' participation in this program. The empirical results of this paper are expected to help the implementation of afforestation programs in Taiwan.
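A minimal sketch of the binary logit estimation described above is given below, using statsmodels with hypothetical landowner data whose signs merely mimic those reported; the actual Ping-Tung survey variables, coding and sample are not reproduced:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
# Hypothetical landowner/cropland variables (not the Ping-Tung survey data)
df = pd.DataFrame({
    "age": rng.normal(55, 10, n),
    "education": rng.integers(6, 17, n),
    "income": rng.normal(60, 20, n),          # annual household income (illustrative units)
    "area": rng.gamma(2.0, 0.5, n),           # cropland area (ha)
    "land_price": rng.normal(100, 25, n),
    "parcels": rng.integers(1, 6, n),
})
# Hypothetical participation outcome consistent with the reported signs
linpred = (-3 + 0.03 * df.age - 0.05 * df.education + 0.01 * df.income
           + 1.2 * df.area - 0.02 * df.land_price - 0.3 * df.parcels)
df["participate"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-linpred))).astype(int)

X = sm.add_constant(df.drop(columns="participate"))
model = sm.Logit(df["participate"], X).fit(disp=False)
print(model.params.round(3))     # estimated coefficients (signs of interest)
print(model.pvalues.round(3))    # significance levels
```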
Keywords: Forestry policy, logit, afforestation subsidy, afforestation policy.
22 Development of Mechanisms of Value Creation and Risk Management Organization in the Conditions of Transformation of the Economy of Russia
Authors: Mikhail V. Khachaturyan, Inga A. Koryagina, Eugenia V. Klicheva
Abstract:
In modern conditions, the scientific analysis of problems in developing mechanisms of value creation and risk management acquires special relevance. The formation of economic knowledge has resulted in the constant analysis of consumer behavior by all players in national and world markets. The development of effective mechanisms for demand analysis, which are crucial for defining the consumer characteristics of future production and for managing the risks connected with the development of this production, is the main objective of control systems in modern conditions. The modern period of economic development is characterized by a high level of globalization of business and rigidity of competition. At the same time, a considerable share of the cost of new products and services has a non-material, intellectual nature. In Russia, the contemporary development of small innovative firms has been the most successful. Such firms, through their unique technologies and new approaches to process management, which form the basis of their intellectual capital, can show flexibility and succeed in the market. As a rule, such enterprises should have a highly variable structure, excluding rigid schemes of subordination and demanding essentially new incentives for the inclusion of personnel in innovative activity. The realization of such structures, as well as a new approach to management, can be constructed on the basis of value-oriented management, which is directed at the gradual change of personnel consciousness and the formation of groups of adherents involved in solving general innovative tasks. At the same time, value changes can gradually capture not only the innovative firm's staff, but also the structure of its corporate partners. The introduction of new technologies is a significant factor contributing to the development of new value imperatives and the acceleration of change in the value systems of the organization. This relates to the fact that new technologies change the internal environment of the organization in such a way that the old system of values becomes inefficient in the new conditions. The introduction of new technologies often demands a change in the structure of employees' interaction and training in the new principles of work. During the introduction of new technologies and the accompanying change in the value system, the structure of the management of the organization's values changes. This is due to the need to attract more staff to justify and consolidate the new value system and to bring their views into the motivational potential of the new value system of the organization.
Keywords: Value, risk, creation, problems, organization.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 97621 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase
Authors: A. Lauvray, F. Poulhaon, P. Michaud, P. Joyot, E. Duc
Abstract:
Additive Friction Stir Manufacturing, or AFSM, is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process that uses the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed, and friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. There is still a lack of understanding of the physical phenomena taking place during the process. This research aims at a better understanding and implementation of the AFSM process through numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and, ultimately, to identify a relevant process window. The deposition of material through the AFSM process takes place in several phases. In chronological order, these phases are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, which raises the temperature of the system through pure friction. An analytic model of friction-based heat generation takes the rotational speed and the contact pressure as its main parameters. Another influential parameter is the friction coefficient, which is assumed to be variable due to the self-lubrication of the system as temperature rises and to the smoothing of the contacting surfaces' roughness over time. Through numerical modeling followed by experimental validation, this study examines the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations in input parameters such as axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
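As a rough illustration of the dwell-phase heat source discussed above, the sketch below implements one common analytic form of frictional heat generation for a flat, circular tool-substrate contact under uniform pressure and pure sliding. The closed-form expression and the numerical values are assumptions for illustration, not the prototype's calibrated model.

```python
# Minimal sketch of a common analytic friction-heat model for the dwell phase,
# assuming a flat circular tool/substrate contact under uniform pressure:
# Q_total = (2/3) * pi * mu * p * omega * R^3   (pure sliding friction)
import numpy as np

def friction_heat(mu, axial_force_N, tool_radius_m, rpm):
    """Total frictional heat input (W) for a rotating flat circular contact."""
    area = np.pi * tool_radius_m**2
    pressure = axial_force_N / area               # contact pressure p (Pa)
    omega = 2 * np.pi * rpm / 60.0                # rotational speed (rad/s)
    return (2.0 / 3.0) * np.pi * mu * pressure * omega * tool_radius_m**3

# Example: 6 kN axial force, 8 mm tool radius, 1200 rpm, friction coefficient 0.3
# (all values illustrative, not the prototype's actual process window)
print(f"{friction_heat(0.3, 6e3, 8e-3, 1200):.0f} W")

# A temperature-dependent friction coefficient, as hypothesized in the abstract,
# can be introduced by making mu a function of the current temperature estimate.
```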
Keywords: Numerical model, additive manufacturing, frictional heat generation, process
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 51620 Exercise and Cognitive Function: Time Course of the Effects
Authors: Simon B. Cooper, Stephan Bandelow, Maria L. Nute, John G. Morris, Mary E. Nevill
Abstract:
Previous research has indicated a variable effect of exercise on adolescents' cognitive function. However, comparisons between studies are difficult to make due to differences in: the mode, intensity and duration of exercise employed; the components of cognitive function measured (and the tests used to assess them); and the timing of the cognitive function tests in relation to the exercise. Therefore, the aim of the present study was to assess the time course (10 and 60 min post-exercise) of the effects of 15 min of intermittent exercise on cognitive function in adolescents. 45 adolescents were recruited to participate in the study and completed two main trials (exercise and resting) in a counterbalanced crossover design. Participants completed 15 min of intermittent exercise (in cycles of 1 min exercise, 30 s rest). A battery of computer-based cognitive function tests (Stroop test, Sternberg paradigm and visual search test) was completed 30 min pre- and 10 and 60 min post-exercise (to assess attention, working memory and perception, respectively). The findings of the present study indicate that, on the baseline level of the Stroop test, response times 10 min following exercise were slower than at any other time point on either trial (trial by session time interaction, p = 0.0308). However, this slowing of responses also tended to produce enhanced accuracy 10 min post-exercise on the baseline level of the Stroop test (trial by session time interaction, p = 0.0780). Similarly, on the complex level of the visual search test there was a slowing of response times 10 min post-exercise (trial by session time interaction, p = 0.0199). However, this was not coupled with an improvement in accuracy (trial by session time interaction, p = 0.2349). The mid-morning bout of exercise did not affect response times or accuracy across the morning on the Sternberg paradigm. In conclusion, the findings of the present study suggest an equivocal effect of exercise on adolescents' cognitive function. The mid-morning bout of exercise appears to cause a speed-accuracy trade-off immediately following exercise on the Stroop test (participants become slower but more accurate), whilst slowing response times on the visual search test and having no effect on performance on the Sternberg paradigm. Furthermore, this work highlights the importance of the timing of the cognitive function tests relative to the exercise, and of the components of cognitive function examined, in future studies.
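For readers who want to reproduce this kind of trial-by-session-time analysis, the sketch below runs a two-way repeated-measures ANOVA on synthetic response-time data with statsmodels' AnovaRM; the data generation and effect sizes are invented for illustration only and do not reproduce the study's results.

```python
# Sketch of the trial (exercise/rest) x session-time (pre, 10 min, 60 min post)
# repeated-measures analysis behind the reported interactions; data are synthetic.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for p in range(1, 46):                            # 45 adolescents
    base = rng.normal(650, 60)                    # baseline response time (ms), invented
    for trial in ("exercise", "rest"):
        for label in ("pre", "post10", "post60"):
            slowing = 40 if (trial == "exercise" and label == "post10") else 0
            rows.append({"participant": p, "trial": trial, "time": label,
                         "rt": base + slowing + rng.normal(0, 25)})
df = pd.DataFrame(rows)

# Two-way within-subjects ANOVA: trial x session time
res = AnovaRM(data=df, depvar="rt", subject="participant",
              within=["trial", "time"]).fit()
print(res)
```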
Keywords: Adolescents, cognitive function, exercise.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 313719 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: G. Candel, D. Naccache
Abstract:
t-SNE is an embedding method widely used by the data science community. It serves two main tasks: displaying results by coloring items according to their class or feature value, and, for forensic purposes, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which not all neighbors in high-dimensional space can be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where a cluster's area is proportional to its number of items, and relationships between clusters are materialized by closeness in the embedding. This algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the other relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, using the newly obtained embedding as the new support. The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, making it possible to observe the birth, evolution and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
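A rough way to approximate this reuse of a reference embedding with off-the-shelf tools is to seed each new point at the position of its nearest neighbor in the support embedding and start t-SNE from that initialization. The sketch below (scikit-learn, synthetic data) illustrates the idea only; it is not the paper's exact two-cost optimization.

```python
# Rough illustration of reusing a reference embedding so that successive t-SNE
# runs stay coherent. This only approximates the paper's two-cost optimization:
# new points are seeded from their nearest neighbors' positions in the previous
# embedding, and t-SNE is started from that initialization.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def embed_with_support(X_new, X_ref, Y_ref, perplexity=30, random_state=0):
    """Embed X_new, initializing each point at its nearest reference point's position."""
    nn = NearestNeighbors(n_neighbors=1).fit(X_ref)
    _, idx = nn.kneighbors(X_new)
    init = Y_ref[idx[:, 0]] + np.random.default_rng(random_state).normal(
        0, 1e-4, size=(len(X_new), 2))            # tiny jitter avoids exact overlaps
    tsne = TSNE(n_components=2, perplexity=perplexity, init=init,
                random_state=random_state)
    return tsne.fit_transform(X_new)

# Usage: embed a reference subset first, then embed later snapshots against it.
rng = np.random.default_rng(0)
X_ref = rng.normal(size=(300, 10))
Y_ref = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_ref)
X_next = X_ref + rng.normal(0, 0.05, size=X_ref.shape)    # drifted snapshot
Y_next = embed_with_support(X_next, X_ref, Y_ref)
```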
Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 48918 Suicide Wrongful Death: Standard of Care Problems Involving the Inaccurate Discernment of Lethal Risk When Focusing on the Elicitation of Suicide Ideation
Authors: Bill D. Geis, Frederick Newman
Abstract:
Suicide and wrongful death forensic cases are the fastest rising tort in mental health law. Most suicide-related personal injury claims fall into the legal category of "wrongful death." Though mental health experts may be called on to address a range of forensic questions in wrongful death cases, the central consultation that most experts provide concerns the negligence element, specifically, whether the clinician met the clinical standard of care in assessing, treating, and managing the deceased person's mental health care. Standards of care, which vary from US state to state, are broad and address what a reasonable clinician might do in a similar circumstance. This leaves it to forensic experts, in each case, to put forth a reasoned estimate of what the standard of care should have been in the specific case under litigation. Because the general state guidelines for the standard of care are broad, forensic experts are readily retained to provide scientific and clinical opinions about whether or not a clinician met the standard of care in their suicide assessment, treatment, and management of the case. In the past, and in much of current practice, the assessment of suicide has centered on the elicitation of verbalized suicide ideation. But in the matter of suicide risk determination, suicide ideation may be a necessary but insufficient target of lethal suicide risk assessment. Assessment of near-term suicide risk, which goes beyond verbalized suicide ideation and relates to acute crisis variables, is likely needed. Specifically, assessment of such additional suicide risk variables may be required in lethal-risk situations, as opposed to the standard practice of discerning general, nonlethal suicidal behavior (whether a patient is having suicidal thoughts or exhibiting ambivalent suicide attempt potential). In the current study, verbalized suicide ideation information was unhelpful in the assessment of lethal risk. The Lethal Suicide Risk Assessment, Acute Model, and other dynamic, near-term risk models (such as the Acute Suicide Affective Disorder Model and the Suicide Crisis Syndrome Model), which go beyond elicited suicide ideation, need to be incorporated into current clinical suicide assessment training and to become the legal standard of care for expected clinical behavior. Without this expanded clinical assessment perspective, the standard of care for suicide assessment is out of sync with current knowledge, an emerging dilemma for the forensic evaluation of suicide wrongful death cases.
Keywords: Forensic evaluation, standard of care, suicide, suicide assessment, wrongful death.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 25617 Comparison of Traditional and Green Building Designs in Egypt: Energy Saving
Authors: Hala M. Abdel Mageed, Ahmed I. Omar, Shady H. E. Abdel Aleem
Abstract:
This paper describes in detail a commercial green building that has been designed and constructed in Marsa Matrouh, Egypt. The balance between homebuilding and the sustainable environment has been taken into consideration in the design and construction of this building. The building consists of one floor with a height of 3 m and an area of 2810 m2, while the envelope area is 1400 m2. The building construction fulfills the natural ventilation requirements. Glass curtain walls make up about 50% of the building, and the window area is 300 m2. 6 mm greenish gray tinted tempered glass as the outer board lite, 6 mm safety glass as the inner board lite, and 16 mm thick dehydrated air spaces are used in the building. Glazing with 50% visible light transmission, a 0.26 solar factor, a 0.67 shading coefficient and a 1.3 W/m2.K thermal insulation U-value is implemented to meet the performance requirements. Optimum electrical distribution for the lighting system, air conditioning and other electrical loads has been carried out. The power and quantity of each type of lamp in the lighting system and the energy consumption of the lighting system are investigated. The design of the air conditioning system is based on summer and winter outdoor conditions. Ventilated and air-conditioned spaces and fresh air rates are determined. Variable Refrigerant Flow (VRF) is the air conditioning system used in this building. The VRF outdoor units are located on the roof of the building and connected to indoor units through refrigerant piping. Indoor units are distributed in all building zones through ducts and air outlets to ensure efficient air distribution. The green building's energy consumption is evaluated monthly over one year and compared with the energy consumed under non-green conditions using the Hourly Analysis Program (HAP) model. The comparison results show that the total energy consumed per year in the green building is about 1,103,221 kWh, while the non-green energy consumption is about 1,692,057 kWh. In other words, the green building's total annual energy cost is reduced from $136,581 to $89,051. This means that the energy saving, and consequently the cost saving, of this green construction is about 35%. In addition, 13 points are awarded by applying one of the most popular worldwide green energy certification programs (Leadership in Energy and Environmental Design, "LEED") as a rating system for the green construction. It is concluded that this green building ensures sustainability, saves energy and offers optimum energy performance at minimum cost.
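The reported saving can be checked directly from the quoted figures; the short calculation below reproduces the roughly 35% reduction in both energy and cost.

```python
# Quick check of the reported saving figures (values taken from the abstract).
green_kwh, base_kwh = 1_103_221, 1_692_057
green_cost, base_cost = 89_051, 136_581          # USD per year

energy_saving = 1 - green_kwh / base_kwh
cost_saving = 1 - green_cost / base_cost
print(f"energy saving: {energy_saving:.1%}, cost saving: {cost_saving:.1%}")
# -> roughly 35% in both cases, matching the stated figure
```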
Keywords: Energy consumption, energy saving, green building, leadership in energy and environmental design, sustainability.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 154416 A Self Supervised Bi-directional Neural Network (BDSONN) Architecture for Object Extraction Guided by Beta Activation Function and Adaptive Fuzzy Context Sensitive Thresholding
Authors: Siddhartha Bhattacharyya, Paramartha Dutta, Ujjwal Maulik, Prashanta Kumar Nandi
Abstract:
A multilayer self-organizing neural network (MLSONN) architecture for binary object extraction, guided by a beta activation function and characterized by backpropagation of errors estimated from the linear indices of fuzziness of the network output states, is discussed. Since the MLSONN architecture is designed to operate in a single-point, fixed/uniform thresholding scenario, it does not take into account the heterogeneity of image information in the extraction process. The performance of the MLSONN architecture with representative values of the threshold parameters of the beta activation function employed is also studied. A three-layer bidirectional self-organizing neural network (BDSONN) architecture comprising fully connected neurons, for the extraction of objects from a noisy background and capable of incorporating the underlying image context heterogeneity through variable and adaptive thresholding, is proposed in this article. The input layer of the network architecture represents the fuzzy membership information of the image scene to be extracted. The second layer (the intermediate layer) and the final layer (the output layer) of the network architecture deal with the self-supervised object extraction task by bi-directional propagation of the network states. Each layer except the output layer is connected to the next layer following a neighborhood-based topology. The output layer neurons are, in turn, connected to the intermediate layer following a similar topology, thus forming a counter-propagating architecture with the intermediate layer. The novelty of the proposed architecture is that the assignment and updating of the inter-layer connection weights are done using the relative fuzzy membership values at the constituent neurons in the different network layers. Another interesting feature of the network lies in the fact that the processing capabilities of the intermediate and the output layer neurons are guided by a beta activation function, which uses image-context-sensitive adaptive thresholding arising out of the fuzzy cardinality estimates of the different network neighborhood fuzzy subsets, rather than resorting to fixed and single-point thresholding. An application of the proposed architecture to object extraction is demonstrated using a synthetic and a real-life image. The extraction efficiency of the proposed network architecture is evaluated by a proposed system transfer index characteristic of the network.
Keywords: Beta activation function, fuzzy cardinality, multilayer self organizing neural network, object extraction.
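The linear index of fuzziness mentioned in the abstract, which drives the error estimates in these self-organizing architectures, has a standard closed form. The sketch below computes it for a membership array; this is the textbook definition only, not the full network implementation.

```python
# Sketch of the linear index of fuzziness used for the error estimates in the
# self-organizing architectures above (standard definition only).
import numpy as np

def linear_index_of_fuzziness(membership):
    """nu(A) = (2/n) * sum(min(mu, 1 - mu)) for membership values in [0, 1]."""
    mu = np.asarray(membership, dtype=float).ravel()
    return 2.0 / mu.size * np.minimum(mu, 1.0 - mu).sum()

# A crisp (fully decided) output has zero fuzziness; a maximally ambiguous one has 1.
print(linear_index_of_fuzziness([0.0, 1.0, 1.0, 0.0]))   # 0.0
print(linear_index_of_fuzziness([0.5, 0.5, 0.5, 0.5]))   # 1.0
```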
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 156515 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances
Authors: P. Mounnarath, U. Schmitz, Ch. Zhang
Abstract:
Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints is treated inconsistently across the various bridge design codes, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed. The artificial ground motion sets, which have peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g with an increment of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that, in the component fragility analysis, the reference bridge model exhibits severe vulnerability compared with the other, more detailed bridge models for all damage states. In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states; they then show higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: bridge models with smaller clearances exhibit lower fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
Keywords: Expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis.
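Fragility curves of the kind described above are commonly fitted as lognormal cumulative distribution functions of the intensity measure. The sketch below fits such a curve to synthetic exceedance counts over the stated PGA grid; the counts and the lognormal form are assumptions for illustration, not the study's actual results.

```python
# Generic sketch of fitting a lognormal fragility curve P(damage >= state | PGA)
# to exceedance fractions over a PGA grid; the counts below are synthetic.
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

pga = np.linspace(0.1, 1.0, 19)                  # g, 0.05 g increments as in the study
n_motions = 20
# Hypothetical number of analyses exceeding a damage state at each PGA level
n_exceed = np.array([0, 0, 1, 1, 2, 4, 5, 7, 9, 11, 13, 14, 16, 17, 18, 18, 19, 19, 20])

def fragility(im, median, beta):
    """Lognormal fragility: Phi(ln(im / median) / beta)."""
    return norm.cdf(np.log(im / median) / beta)

params, _ = curve_fit(fragility, pga, n_exceed / n_motions, p0=(0.5, 0.5))
median, beta = params
print(f"median capacity = {median:.2f} g, dispersion beta = {beta:.2f}")
```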
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 171514 The Effects of Human Activity in Yasuj Area on the Health of Stream City
Authors: Jamalodin Alvani, Fardin Boustani, Omid Tabiee, Masoud Hashemi
Abstract:
The stream of the city of Yasuj, named the Beshar, supplies water for different uses such as aquaculture farms, drinking, agriculture and industry. Fish processing plants, agricultural farms, and the wastewater of industrial zones and hospitals, all generated by human activity, produce a considerable volume of effluent, and when this effluent is released into the stream it can affect the water quality and downstream aquatic systems. This study was conducted to evaluate the effects of outflow effluent from different human activities and from point and non-point pollution sources on the water quality and health of the Beshar river next to Yasuj. Yasuj is the biggest and most important city in Kohkiloye and Boyerahmad province. The Beshar river is one of the most important aquatic ecosystems in the upstream part of the Karun watershed in the south of Iran and is affected by point and non-point pollutant sources. This study was done in order to evaluate the effects of human activities on the water quality and health of the Beshar river. The river is approximately 190 km in length and is situated at the geographical position of 51° 20' to 51° 48' E and 30° 18' to 30° 52' N; it is one of the most important aquatic ecosystems of Kohkiloye and Boyerahmad province in south-west Iran. In this research project, five study stations were selected to examine water pollution in the Beshar river system. Human activity is now one of the most important factors affecting the hydrology and water quality of the Beshar river. Humans use large amounts of resources to sustain various standards of living, although measures of sustainability are highly variable depending on how sustainability is defined. The Beshar river ecosystems are particularly sensitive and vulnerable to human activities. The water samples were analyzed and some important water quality parameters, such as pH, dissolved oxygen (DO), Biochemical Oxygen Demand (BOD5), Chemical Oxygen Demand (COD), Total Suspended Solids (TSS), turbidity, temperature, nitrates (NO3) and phosphates (PO4), were estimated at the two stations. The results show a downward trend in water quality downstream of the city. The amounts of BOD5, COD, TSS, temperature, turbidity, NO3 and PO4 at the downstream stations were considerably higher than at station 1. By contrast, the amounts of DO at the downstream stations were lower than at station 1. However, when the effluent discharges resulting from human activities are released into the Beshar river near the city, the quality of the river decreases, and the environmental problems of the river are predicted to rise over the coming years.
Keywords: Health, human activities, water pollution, Yasuj, Iran.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2184