Search results for: yield and yield components.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2117

407 Development and Validation of the Response to Stressful Situations Scale in the General Population

Authors: C. Barreto Carvalho, C. da Motta, M. Sousa, J. Cabral, A. L. Carvalho, E. B. Peixoto

Abstract:

The aim of the current study was to develop and validate a Response to Stressful Situations Scale (RSSS) for the Portuguese population. The scale assesses the degree of stress experienced in scenarios that can constitute positive, negative and more neutral stressors, and also describes the physiological, emotional and behavioral reactions to those events according to their intensity. The scenarios include typical stressors relevant to patients with schizophrenia, which are currently absent from most scales; assessing the specific risks these stressors pose may prove useful in both non-clinical and clinical populations (i.e., patients with mood or anxiety disorders, or schizophrenia). Results from Principal Components Analysis and Confirmatory Factor Analysis of two adult samples from the general population confirmed a three-factor model with good fit indices: χ2(144) = 370.211, p < 0.001; GFI = 0.928; CFI = 0.927; TLI = 0.914; RMSEA = 0.055, P(RMSEA ≤ 0.05) = 0.096; PCFI = 0.781. Further analysis revealed that the RSSS is an adequate tool for assessing stress response in adults, suitable for research and clinical settings, with good psychometric characteristics, adequate divergent and convergent validity, good temporal stability and high internal consistency.

Keywords: Assessment, stress events, stress response, stress vulnerability.

406 Influence of Microstructural Features on Wear Resistance of Biomedical Titanium Materials

Authors: Mohsin T. Mohammed, Zahid A. Khan, Arshad N. Siddiquee

Abstract:

Biomedical materials play an imperative and critical role in manufacturing a variety of artificial biological replacements in the modern world. Recently, titanium (Ti) materials have been used as biomaterials because of their superior corrosion resistance, tremendous specific strength, freedom from allergic problems and the greatest biocompatibility among competing biomaterials such as stainless steel, Co-Cr alloys, ceramics, polymers, and composite materials. However, regardless of these excellent properties, implantable Ti materials have poor shear strength and wear resistance, which limits their applications as biomaterials. Even though the wear properties of Ti alloys have shown some improvement, the effective use of biomedical Ti alloys as wear components requires a deep understanding of the causes and mechanisms of wear, and of the techniques that can be used to improve wear behavior. This review examines current information on the effect of thermal and thermomechanical processing of implantable Ti materials on the long-term prosthetic requirements related to wear behavior. It focuses mainly on the evolution, evaluation and development of effective microstructural features that can improve the wear properties of bio-grade Ti materials using thermal and thermomechanical treatments.

Keywords: Wear Resistance, Heat Treatment, Thermomechanical Processing, Biomedical Titanium Materials.

405 Dynamic Response of Strain Rate Dependent Glass/Epoxy Composite Beams Using Finite Difference Method

Authors: M. M. Shokrieh, A. Karamnejad

Abstract:

This paper presents a numerical analysis of the transient response of composite beams with strain-rate-dependent mechanical properties using a finite difference method. The equations of motion are derived based on Timoshenko beam theory, and geometric nonlinearity is taken into account with von Kármán large deflection theory. The finite difference method, in conjunction with the Newmark average acceleration method, is applied to solve the differential equations. A modified progressive damage model that accounts for strain rate effects is developed based on material property degradation rules and modified Hashin-type failure criteria, and is added to the finite difference model. The components of the model are implemented in a computer code in Mathematica 6. Glass/epoxy laminated composite beams with constant and strain-rate-dependent mechanical properties under dynamic load are analyzed, and the effects of strain rate on the dynamic response of the beam for various stacking sequences, loads and boundary conditions are investigated.
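For the time-integration step the abstract names, a minimal Python sketch of the Newmark average acceleration scheme (β = 1/4, γ = 1/2) for a linear system M·a + C·v + K·d = F(t) is given below; the paper's actual model adds von Kármán nonlinearity and progressive damage, which are not reproduced here, so treat the interface as an assumption.

```python
import numpy as np

def newmark_average_acceleration(M, C, K, F, d0, v0, dt, n_steps):
    """Integrate M*a + C*v + K*d = F(t) with the Newmark average
    acceleration scheme (beta = 1/4, gamma = 1/2), unconditionally
    stable for linear problems."""
    beta, gamma = 0.25, 0.5
    d, v = d0.copy(), v0.copy()
    a = np.linalg.solve(M, F(0.0) - C @ v - K @ d)   # initial acceleration
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    history = [d.copy()]
    for i in range(1, n_steps + 1):
        t = i * dt
        # effective load collects the known terms from the previous step
        rhs = (F(t)
               + M @ (d / (beta * dt**2) + v / (beta * dt)
                      + (1 / (2 * beta) - 1) * a)
               + C @ (gamma / (beta * dt) * d + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        d_new = np.linalg.solve(K_eff, rhs)
        v_new = (gamma / (beta * dt) * (d_new - d)
                 + (1 - gamma / beta) * v
                 + dt * (1 - gamma / (2 * beta)) * a)
        a_new = ((d_new - d) / (beta * dt**2) - v / (beta * dt)
                 - (1 / (2 * beta) - 1) * a)
        d, v, a = d_new, v_new, a_new
        history.append(d.copy())
    return np.array(history)
```

In a beam discretization, M, C and K would come from the finite difference stencils; here they are any consistent NumPy matrices.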

Keywords: Composite beam, Finite difference method, Progressive damage modeling, Strain rate.

404 Alignment between Understanding and Assessment Practice among Secondary School Teachers

Authors: Eftah Bte. Moh @ Hj Abdullah, Izazol Binti Idris, Abd Aziz Bin Abd Shukor

Abstract:

This study aimed to identify the alignment between understanding and assessment practices among secondary school teachers. It was carried out as a quantitative descriptive study. The sample consisted of 164 teachers who taught Form 1 and 2 in 11 secondary schools in the district of North Kinta, Perak, Malaysia. Data were obtained from the 164 respondents, who answered the Expectation Alignment Understanding and Practices of School Assessment (PEKDAPS) questionnaire, and were analysed using SPSS 17.0+. The Cronbach's alpha value obtained in the PEKDAPS pilot study was 0.86. The results showed that the teachers' mean performance on the PEKDAPS was less than 3, which means that perfect alignment did not occur between the understanding and practices of school assessment. Two major PEKDAPS sub-constructs, articulation across grade and age and usability of the system, scored higher than moderate alignment (mean = 2.0), whereas the content focus sub-construct scored lower than moderate alignment (mean = 2.0). Another two sub-constructs, transparency and fairness and pedagogical implications, showed moderate alignment (2.0). The implication of the study is that teachers need to fully understand the importance of alignment among the components of assessment, learning and teaching, and learning objectives as a strategy for achieving a quality assessment process.

Keywords: Alignment, assessment practices, School Based Assessment, understanding.

403 Condition Monitoring in the Management of Maintenance in a Large Scale Precision CNC Machining Manufacturing Facility

Authors: N. Ahmed, A. J. Day, J. L. Victory, L. Zeall, B. Young

Abstract:

The manufacture of large-scale precision aerospace components using CNC machining requires a highly effective maintenance strategy to ensure that the required accuracy can be achieved over many hours of production. This paper reviews a strategy for a maintenance management system based on Failure Mode Avoidance, which uses advanced techniques and technologies to underpin a predictive maintenance strategy. It is shown how condition monitoring (CM) is important to predict potential failures in high precision machining facilities and achieve intelligent and integrated maintenance management. There are two distinct ways in which CM can be applied. One is to monitor key process parameters and observe trends which may indicate a gradual deterioration of accuracy in the product. The other is to use CM techniques to monitor high-status machine parameters, enabling trends to be observed and corrected before machine failure and downtime occur. It is concluded that the key to developing a flexible and intelligent maintenance framework in any precision manufacturing operation is the ability to evaluate machine tool condition reliably and routinely using condition monitoring techniques within a framework of Failure Mode Avoidance.

Keywords: Maintenance, Condition Monitoring, CNC, Machining, Accuracy, Capability, Key Process Parameters, Critical Parameters.

402 Automatic Adjustment of Thresholds via Closed-Loop Feedback Mechanism for Solder Paste Inspection

Authors: Chia-Chen Wei, Pack Hsieh, Jeffrey Chen

Abstract:

Surface Mount Technology (SMT) is widely used in electronic assembly, in which electronic components are mounted on the surface of the printed circuit board (PCB). Most defects in the SMT process are related to the quality of solder paste printing, and they lead to considerable manufacturing costs in the electronics assembly industry. Therefore, the solder paste inspection (SPI) machine, which controls and monitors the amount of solder paste printed, has become an important part of the production process. So far, SPI thresholds have been set on the basis of statistical analysis and experts' experience. Because the production data are not normally distributed and the production processes exhibit various sources of variation, defects related to solder paste printing still occur. To solve this problem, this paper proposes an online machine learning algorithm, called the automatic threshold adjustment (ATA) algorithm, together with a closed-loop architecture in the SMT process to determine the best threshold settings. Simulation experiments show that the proposed threshold settings improve the accuracy from 99.85% to 100%.
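The abstract does not publish the ATA update rule, so the Python sketch below is only a hypothetical stand-in showing what a closed-loop threshold adjuster could look like: it tracks an empirical quantile of recent measurements (no normality assumed) and corrects the limit from downstream inspection feedback. All names and constants are illustrative.

```python
import numpy as np

class AdaptiveThreshold:
    """Hypothetical closed-loop updater for an SPI lower volume limit;
    not the authors' ATA algorithm, whose details are unpublished."""

    def __init__(self, limit, step=0.02, window=500):
        self.limit = limit        # e.g. minimum acceptable paste volume, %
        self.step = step
        self.window = window
        self.buf = []

    def observe(self, volume_pct):
        # sliding window of recent measurements; drift the limit toward
        # the empirical 1st percentile instead of a normal-theory bound
        self.buf.append(volume_pct)
        self.buf = self.buf[-self.window:]
        self.limit += 0.1 * (np.percentile(self.buf, 1) - self.limit)

    def feedback(self, escaped_defect=False, false_alarm=False):
        # closed loop from downstream inspection:
        # tighten on escapes, relax on false alarms
        if escaped_defect:
            self.limit *= 1 + self.step
        if false_alarm:
            self.limit *= 1 - self.step

ata = AdaptiveThreshold(limit=70.0)
for v in np.random.default_rng(0).normal(100, 8, size=200):
    ata.observe(v)
ata.feedback(false_alarm=True)    # e.g. downstream AOI cleared a flagged board
```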

Keywords: Big data analytics, Industry 4.0, SPI threshold setting, surface mount technology.

401 Cluster Analysis of Retailers’ Benefits from Their Cooperation with Manufacturers: Business Models Perspective

Authors: M. K. Witek-Hajduk, T. M. Napiórkowski

Abstract:

A number of studies have discussed the benefits of retailer-manufacturer cooperation and coopetition. However, only a few publications focus on the benefits of cooperation and coopetition between retailers and their suppliers of durable consumer goods, especially in the context of the business models of the cooperating partners. This paper provides a clustering approach that segments retailers selling consumer durables according to the benefits they obtain from cooperation with key manufacturers, and differentiates those retailers in terms of the business models of the cooperating partners. For the purpose of the study, a CATI survey collected data on 603 consumer durables retailers present on the Polish market. Retailers are clustered with both hierarchical and non-hierarchical methods. Five distinctive groups of consumer durables retailers are identified, based on the studied benefits, using a two-stage clustering approach. The clusters are then characterized with a set of exogenous variables, key among which are the business models employed by the retailer and its partnering key manufacturer. The paper finds that the combination of a medium-sized retailer classified as an Integrator with chiefly domestic capital and a manufacturer categorized as a Market Player yields the highest benefits. On the other side of the spectrum is the medium-sized Distributor retailer with solely domestic capital; in this case, the business model of the cooperating manufacturer appears to be irrelevant. This is one of the first empirical studies to use cluster analysis on primary data to define the types of cooperation between consumer durables retailers and the manufacturers that are their key suppliers. The analysis integrates the perspectives of both retailers' and manufacturers' business models and matches them with individual and joint benefits.
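As a rough illustration of a two-stage clustering approach (a Ward hierarchy seeding a k-means refinement), the Python sketch below runs on placeholder data; the survey's actual benefit variables are not public, so the eight-variable matrix is an assumption and only the sample size and the five-cluster result are taken from the abstract.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(603, 8))            # placeholder benefit ratings

Xs = StandardScaler().fit_transform(X)
# stage 1: Ward hierarchy proposes a five-group partition
Z = linkage(Xs, method="ward")
labels_h = fcluster(Z, t=5, criterion="maxclust")
# stage 2: k-means refines it, seeded with the hierarchical centroids
centers = np.vstack([Xs[labels_h == k].mean(axis=0) for k in range(1, 6)])
km = KMeans(n_clusters=5, init=centers, n_init=1, random_state=0).fit(Xs)
```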

Keywords: Business model, cooperation, cluster analysis, retailer-manufacturer relationships.

400 Image Clustering Framework for BAVM Segmentation in 3DRA Images: Performance Analysis

Authors: F. H. Sarieddeen, R. El Berbari, S. Imad, J. Abdel Baki, M. Hamad, R. Blanc, A. Nakib, Y. Chenoune

Abstract:

Brain ArterioVenous Malformation (BAVM) is an abnormal tangle of brain blood vessels in which arteries shunt directly into veins with no intervening capillary bed, causing high pressure and a risk of hemorrhage. The success of treatment by embolization in interventional neuroradiology is highly dependent on the accuracy of vessel visualization. In this paper, the performance of clustering techniques for vessel segmentation from 3-D rotational angiography (3DRA) images is investigated and a new segmentation technique is proposed. The method consists of a preprocessing step of image enhancement, after which K-Means (KM), Fuzzy C-Means (FCM) and Expectation-Maximization (EM) clustering are used to separate vessel pixels from the background, and artery pixels from vein pixels where possible. A post-processing step removes false-alarm components before a three-dimensional volume of the vessels is constructed. The proposed method was tested on six datasets together with the assessment of a medical expert, and the results showed encouraging segmentations.
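For the clustering core only, a minimal Python sketch of intensity-based KM and EM segmentation on a synthetic volume is shown below; FCM is omitted because it needs a third-party package (e.g. scikit-fuzzy), and the two-cluster setup and brightest-cluster-is-vessel rule are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# toy stand-in for a 3DRA volume: bright "vessels" on a dark background
rng = np.random.default_rng(1)
volume = rng.normal(0.2, 0.05, size=(32, 32, 32))
volume[10:20, 10:20, 10:20] += 0.6        # synthetic vessel blob

intensities = volume.reshape(-1, 1)
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(intensities)
em_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(intensities)

def vessel_mask(labels):
    # call the cluster with the higher mean intensity "vessel"
    means = [intensities[labels == k].mean() for k in (0, 1)]
    return (labels == int(np.argmax(means))).reshape(volume.shape)

mask_km, mask_em = vessel_mask(km_labels), vessel_mask(em_labels)
```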

Keywords: Brain arteriovenous malformation (BAVM), 3-D rotational angiography (3DRA), K-Means (KM) clustering, Fuzzy C-Means (FCM) clustering, Expectation Maximization (EM) clustering, volume rendering.

399 Robust Clustering with Dimension Reduction

Authors: Dyah E. Herwindiati

Abstract:

Clustering is the process of identifying homogeneous groups of objects, called clusters, and is an interesting topic in data mining; the objects within a group share similar characteristics. This paper discusses a robust clustering process for image data with two dimension reduction approaches: two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard approach to high-dimensional data is dimension reduction, which transforms the data into a lower-dimensional space with limited loss of information; one of the most common forms is PCA. 2DPCA is often called a variant of PCA in which the image matrices are treated directly as 2D matrices; they do not need to be transformed into vectors, so the image covariance matrix can be constructed directly from the original image matrices. The classical covariance matrix used in this decomposition is, however, very sensitive to outlying observations. The objective of the paper is to compare the performance of the robust minimizing vector variance (MVV) estimator under the two-dimensional projection (2DPCA) and under PCA for clustering arbitrary image data when outliers are hidden in the data set. Simulation results on robustness and illustrations of image clustering are discussed at the end of the paper.
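The classical (non-robust) 2DPCA step described above, an image covariance built directly from 2-D matrices, can be sketched in a few lines of Python; the robust MVV estimator that the paper substitutes for the classical covariance is not reproduced here, and the demo data are placeholders.

```python
import numpy as np

def twod_pca(images, k):
    """Classical 2DPCA (after Yang et al.): build the image covariance
    from the 2-D matrices themselves (no vectorization) and project
    each image onto the top-k eigenvectors."""
    A = np.stack(images).astype(float)          # shape (n, h, w)
    mean = A.mean(axis=0)
    G = np.zeros((A.shape[2], A.shape[2]))
    for img in A:
        d = img - mean
        G += d.T @ d                            # column-wise scatter
    G /= len(A)
    vals, vecs = np.linalg.eigh(G)
    X = vecs[:, np.argsort(vals)[::-1][:k]]     # top-k directions
    return [img @ X for img in A], X

imgs = np.random.default_rng(0).normal(size=(50, 28, 28))  # placeholder images
features, projection = twod_pca(imgs, k=5)
```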

Keywords: Breakdown point, Consistency, 2DPCA, PCA, Outlier, Vector Variance

398 Biogas from Cover Crops and Field Residues: Effects on Soil, Water, Climate and Ecological Footprint

Authors: Manfred Szerencsits, Christine Weinberger, Maximilian Kuderna, Franz Feichtinger, Eva Erhart, Stephan Maier

Abstract:

Cover or catch crops have beneficial effects for soil, water, erosion control, etc. If harvested, they also provide feedstock for biogas without competing for arable land in regions where only one main crop can be produced per year. On average, gross energy yields of approx. 1300 m³ methane (CH4) ha-1 can be expected from 4.5 tonnes (t) of cover crop dry matter (DM) in Austria. Considering the total energy invested from cultivation to compression for biofuel use, a net energy yield of about 1000 m³ CH4 ha-1 remains. Similar energy yields can be achieved with the straw of grain maize or Corn Cob Mix (CCM). In comparison to catch crops remaining on the field as green manure, or to complete fallow between main crops, the effects on soil, water and climate can be improved if cover crops are harvested without soil compaction and digestate is returned to the field in an amount equivalent to cover crop removal. In this way, the risk of nitrate leaching can be reduced by approx. 25% in comparison to full fallow, and the risk of nitrous oxide emissions may be reduced by up to 50% compared with cover crops serving as green manure. The effects on humus content and erosion are similar to, or better than, those of cover crops used as green manure when the same amount of biomass is produced. With higher biomass production the positive effects increase, even if cover crops are harvested and only the digestate is brought back to the fields. The ecological footprint of arable farming can be reduced by approx. 50% considering the substitution of natural gas with CH4 produced from cover crops.
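The yield figures quoted above imply, as simple arithmetic, a specific methane yield and a net-to-gross energy ratio of

\[
\frac{1300\ \mathrm{m^3\,CH_4\,ha^{-1}}}{4.5\ \mathrm{t\,DM\,ha^{-1}}} \approx 289\ \mathrm{m^3\,CH_4\,(t\,DM)^{-1}},
\qquad
\frac{1000}{1300} \approx 77\%.
\]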

Keywords: Biogas, cover crops, catch crops, land use competition, sustainable agriculture.

397 Software Maintenance Severity Prediction for Object Oriented Systems

Authors: Parvinder S. Sandhu, Roma Jaswal, Sandeep Khimta, Shailendra Singh

Abstract:

Since the majority of faults are found in a few modules, there is a need to identify the modules that are affected most severely and to carry out timely maintenance on them, especially for critical applications. Neural networks, which have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics, are non-linear, sophisticated modeling techniques able to model complex functions; they are used when the exact relationship between inputs and outputs is not known, since a key feature is that they learn this relationship through training. In the present work, various neural network based techniques are explored and a comparative analysis is performed for predicting the level of maintenance needed, by predicting the severity level of the faults present in NASA's public domain defect dataset. The algorithms are compared on the basis of Mean Absolute Error, Root Mean Square Error and accuracy values. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying software components into levels of severity of fault impact, and it can be used to develop a model for identifying modules that are heavily affected by faults.
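Since the abstract names the Generalized Regression Neural Network and the three comparison criteria, the sketch below shows a minimal GRNN (a kernel-weighted average of training targets, after Specht) and the corresponding metrics in Python; the kernel width, the rounding rule for accuracy and the data shapes are assumptions, and the paper's actual software-metric features are not reproduced.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

def grnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Minimal GRNN: each prediction is a Gaussian-kernel-weighted
    average of the training targets."""
    preds = []
    for x in X_test:
        w = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2 * sigma ** 2))
        preds.append(np.dot(w, y_train) / (w.sum() + 1e-12))
    return np.array(preds)

def report(y_true, y_pred):
    mae = mean_absolute_error(y_true, y_pred)
    rmse = mean_squared_error(y_true, y_pred) ** 0.5
    acc = np.mean(np.round(y_pred) == y_true)   # severity levels are ordinal
    return mae, rmse, acc

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 6)), rng.integers(1, 5, size=100)  # placeholders
print(report(y[80:], grnn_predict(X[:80], y[:80], X[80:])))
```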

Keywords: Neural Network, Software faults, Software Metric.

396 Thermogravimetry Study on Pyrolysis of Various Lignocellulosic Biomass for Potential Hydrogen Production

Authors: S.S. Abdullah, S. Yusup, M.M. Ahmad, A. Ramli, L. Ismail

Abstract:

This paper studies the decomposition behavior in a pyrolytic environment of four lignocellulosic biomasses (oil palm shell, oil palm frond, rice husk and paddy straw) and two commercial components of biomass (pure cellulose and lignin), using a thermogravimetric analyzer (TGA). The unit consists of a microbalance and a furnace purged with 100 cc (STP) min-1 of nitrogen (N2) as the inert gas. The heating rate was set at 20°C min-1 and the temperature was ramped from 50 to 900°C. Hydrogen gas production during pyrolysis was observed using an Agilent 7890A gas chromatography analyzer. Oil palm shell, oil palm frond, paddy straw and rice husk were found to be sufficiently reactive in a pyrolytic environment of up to 900°C, since pyrolysis of these biomasses starts at temperatures as low as 200°C and the maximum weight loss is reached at about 500°C. Since there was not much difference in the cellulose, hemicellulose and lignin fractions among oil palm shell, oil palm frond, paddy straw and rice husk, the T-50 and R-50 values obtained are almost similar. H2 production started rapidly at this temperature as well, due to the decomposition of biomass inside the TGA. Biomass with a higher lignin content, such as oil palm shell, was found to sustain H2 production for longer than materials with high cellulose and hemicellulose contents.

Keywords: Biomass, decomposition, hydrogen, lignocellulosic, thermogravimetry.

395 Applications of Mobile Aluminum Light Structure Housing System in Sustainable Building Process

Authors: Haining Wang, Hong Zhang

Abstract:

Problems exist in the present construction industry in China: conflicts such as those between resource conservation and a huge population, between living space needs and low building production efficiency, and between environmental protection and a high-pollution production pattern hinder the development of the whole society. In order to solve these problems, research is needed to explore a suitable building system. By investigating the whole architectural process and contrasting light structures with heavy structures, this paper raises concepts to cope with the existing challenges, such as design conception based on the product and the real construction process, design methods focusing on components, and maximum utilization of the temporary building by optimizing construction speed and building performance. The project was not only designed in virtual reality but also physically constructed in the real world. A series of aluminum light structure housing systems was ultimately developed, characterized by high performance, extremely rapid construction and flexible function. It can be used in many settings, ranging from a single building in a remote area to a large residential community.

Keywords: Aluminum house, light structure, rapid assembly, repeat construction.

394 Highly Scalable, Reversible and Embedded Image Compression System

Authors: Federico Pérez González, Iñaki Goiricelaia Ordorika, Pedro Iriondo Bengoa

Abstract:

A new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. The coder is a continuous-tone still image compression system that combines lossy and lossless compression by making use of finite-arithmetic reversible transforms: both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by a system based on a subdivision into smaller components (CFDS), similar to bit-importance codification. The subcomponents so obtained are reordered by a highly configurable alignment system that, depending on the application, makes it possible to reconfigure the elements of the image and obtain different levels of importance from which the bit stream is generated. The subcomponents of each importance level are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream is itself a bit stream coding a compressed still image; moreover, applying a packing system to the bit stream after the VBLm allows a final, highly scalable bit stream to be built from a basic image level and one or several enhancement levels.
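The reversible-transform ingredient can be illustrated with the standard LeGall 5/3 integer lifting step in Python; this generic sketch (with periodic boundary handling via np.roll) is not the authors' transform, and the CFDS and VBLm coders are not reproduced.

```python
import numpy as np

def lift53_forward(x):
    """One level of the reversible LeGall 5/3 lifting transform on an
    even-length 1-D integer signal: predict odds from evens, then
    update evens; both steps are integer-valued, so they invert exactly."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    odd -= (even + np.roll(even, -1)) // 2        # predict step
    even += (odd + np.roll(odd, 1) + 2) // 4      # update step
    return even, odd                               # approximation, detail

def lift53_inverse(even, odd):
    even = even - (odd + np.roll(odd, 1) + 2) // 4   # undo update
    odd = odd + (even + np.roll(even, -1)) // 2      # undo predict
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

sig = np.arange(16, dtype=np.int64)
assert np.array_equal(sig, lift53_inverse(*lift53_forward(sig)))  # lossless
```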

Keywords: Image compression, wavelet transform, highly scalable, reversible transform, embedded, subcomponents.

393 Modeling Directional Thermal Radiance Anisotropy for Urban Canopy

Authors: Limin Zhao, Xingfa Gu, C. Tao Yu

Abstract:

One of the significant factors for improving the accuracy of Land Surface Temperature (LST) retrieval is a correct understanding of the directional anisotropy of thermal radiance. In this paper, the multiple scattering between heterogeneous non-isothermal surfaces is described rigorously using the concept of the configuration factor, on which basis a directional thermal radiance model is built and the directional radiant character of an urban canopy is analyzed. The model is applied to a simple urban canopy with a row structure to simulate the change of Directional Brightness Temperature (DBT). The results show that the DBT is increased by the multiple scattering effects, whereas its range of variation is smoothed. The temperature differences, spatial distribution and emissivity of the components can all change the DBT. A "hot spot" phenomenon occurs when the proportion of high-temperature components in the field of view reaches its maximum, a "cool spot" occurs when the proportion of low-temperature components peaks, and the "spot" effect disappears only when the proportion of every component remains constant. The model built in this paper can be used to study the directional effect on emissivity, LST retrieval over urban areas and the adjacency effect in thermal remote sensing pixels.
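As an illustrative single-scattering form of such a model (a simplification for orientation, not the paper's full formulation), the directional radiance can be written as a view-fraction-weighted sum over canopy components:

\[
L(\theta,\varphi) = \sum_i f_i(\theta,\varphi)\left[\varepsilon_i B(T_i) + (1-\varepsilon_i)\sum_{j} F_{ij}\,\varepsilon_j B(T_j)\right],
\]

where \(f_i(\theta,\varphi)\) is the fraction of component \(i\) seen from the view direction, \(F_{ij}\) the configuration factor between components, \(\varepsilon_i\) the emissivity and \(B(T)\) the blackbody radiance; the DBT is then the temperature whose blackbody radiance matches \(L\).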

Keywords: Directional thermal radiance, multiple scattering, configuration factor, urban canopy, hot spot effect

392 Influence of Optical Fluence Distribution on Photoacoustic Imaging

Authors: Mohamed K. Metwally, Sherif H. El-Gohary, Kyung Min Byun, Seung Moo Han, Soo Yeol Lee, Min Hyoung Cho, Gon Khang, Jinsung Cho, Tae-Seong Kim

Abstract:

Photoacoustic imaging (PAI) is a non-invasive and non-ionizing imaging modality that combines the absorption contrast of light with ultrasound resolution. A laser is used to deposit optical energy (optical fluence) into a target; consequently, the target temperature rises and the resulting thermal expansion generates a PA signal. Most image reconstruction algorithms for PAI assume uniform fluence within the imaged object, yet the optical fluence distribution within the object is known to be non-uniform, which could affect the reconstruction of PA images. In this study, we investigated the influence of the optical fluence distribution on PA back-propagation imaging using the finite element method. The uniform fluence was simulated as a triangular waveform within the object of interest, while the non-uniform fluence distribution was estimated by solving light propagation within a tissue model via the Monte Carlo method. The results show that the PA signal in the non-uniform case is 23% wider than in the uniform case, and its frequency spectrum is missing some high-frequency components present in the uniform case. Consequently, the image reconstructed with the non-uniform fluence exhibits a strong smoothing effect.

Keywords: Finite Element Method, Fluence Distribution, Monte Carlo Method, Photoacoustic Imaging.

391 Motivational Antecedents that Influenced a Higher Education Institution in the Philippines to Adopt Enterprise Architecture

Authors: Ma. Eliza Jijeth V. dela Cruz

Abstract:

Technology has permeated almost every aspect of everyday life, changing how people work, learn and perceive things. Academic institutions, like other organizations, have deeply modified their strategies to integrate technology into the institutional vision and corporate strategy, to a degree that has never been greater. Information and Communications Technology (ICT) continues to be recognized as a major factor in organizations realizing their aims and objectives; consequently, ICT plays an important role in mobilizing an academic institution's strategy to support the delivery of operational, strategic or transformational objectives. The ICT strategy should align the institution with the radical changes of the ICT world through the use of Enterprise Architecture (EA), whose objective is to integrate the islands of legacy processes into an architecture that is receptive to change and supports the delivery of the strategy. This paper explores the motivational antecedents observed during the adoption of EA in a Higher Education Institution in the Philippines for its ICT strategic plan. Seven antecedents (viewpoint, stakeholders, human traits, vision, revolutionary innovation, techniques and change components) provide insight into EA adoption and into the factors that influence the adoption process.

Keywords: Enterprise architecture, adoption, antecedents, higher education institution.

390 Exergetic and Sustainability Evaluation of a Building Heating System in Izmir, Turkey

Authors: Nurdan Yildirim, Arif Hepbasli

Abstract:

Heating, cooling and lighting appliances in buildings account for more than one third of the world's primary energy demand, so the main components of building heating systems play an essential role in energy consumption. In this context, efficient energy and exergy utilization in HVAC-R systems is essential, especially for developing energy policies aimed at increasing efficiency. The main objective of the present study is to assess the performance of a family house with a volume of 326.7 m3 and a net floor area of 121 m2, located in the city of Izmir, Turkey, in terms of energetic, exergetic and sustainability aspects. The indoor and exterior air temperatures are taken as 20°C and 1°C, respectively. The analysis and assessment use various metrics (indices or indicators) such as exergetic efficiency, exergy flexibility ratio and sustainability index. Two heating options (Case 1: condensing boiler; Case 2: air heat pump) are considered for comparison. The total heat loss rate of the family house is determined to be 3770.72 W. The overall energy efficiencies of the studied cases are calculated to be 49.4% for Case 1 and 54.7% for Case 2, while the overall exergy efficiencies, the flexibility factor and the sustainability index of both cases are computed to be around 3.3%, 0.17 and 1.034, respectively.
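The reported figures are mutually consistent under the commonly used definition of the sustainability index as the reciprocal of the exergy-destruction fraction:

\[
SI = \frac{1}{1-\psi}, \qquad \psi \approx 0.033 \;\Rightarrow\; SI = \frac{1}{1-0.033} \approx 1.034,
\]

where \(\psi\) is the overall exergy efficiency.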

Keywords: Buildings, exergy, low exergy, sustainability, efficiency, heating, renewable energy.

389 Hydrodynamic Simulation of Co-Current and Counter Current of Column Distillation Using Euler Lagrange Approach

Authors: H. Troudi, M. Ghiss, Z. Tourki, M. Ellejmi

Abstract:

Packed columns for liquefied petroleum gas (LPG) separate a liquid mixture of propane and butane into pure components by distillation. Gas and liquid can flow through the columns in two ways: co-current and counter-current operation. Heat, mass and species transfer between the phases are the most important factors influencing the choice between these two operations. In this paper, both processes are analyzed through computational CFD simulation with the ANSYS Fluent software. Only a 3D half-section of the packed column with one packed bed was considered, the packed bed being characterized in our case as a porous medium. The simulations were carried out under transient conditions with multi-component gas and liquid mixtures in both processes. We used the Euler-Lagrange approach, in which the gas is treated as a continuous phase and the liquid as a group of dispersed particles, and modeled heat and mass transfer using a multi-component droplet evaporation approach. The results show that the counter-current process performs better than the co-current one, although limitations of our approach are noted. The comparison gives accurate results for computation times longer than 2 s, at different gas velocities and at a packed bed porosity of 0.9.

Keywords: Co-current, counter current, Euler Lagrange model, heat transfer, mass transfer.

388 Reliability Analysis of Press Unit using Vague Set

Authors: S. P. Sharma, Monica Rani

Abstract:

In conventional reliability assessment, the reliability data of system components are treated as crisp values. Collected data, however, carry uncertainties due to errors by human beings or machines, or from other sources, and these uncertainty factors limit the understanding of system component failure because the data are incomplete. In such situations, classical methods need to be generalized to a fuzzy environment for studying and analyzing the systems of interest. Fuzzy set theory handles such vagueness by generalizing the notion of membership in a set: in a Fuzzy Set (FS), each element is associated with a point value selected from the unit interval [0, 1], termed the grade of membership in the set. A Vague Set (VS), like an Intuitionistic Fuzzy Set (IFS), is a further generalization of an FS: instead of the point-based membership used in FS, interval-based membership is used in VS, which is more expressive in capturing the vagueness of data. In the present paper, vague set theory coupled with the conventional Lambda-Tau method is presented for the reliability analysis of repairable systems. The methodology models the system with Petri nets (PN) rather than a fault tree, because PN allow efficient simultaneous generation of minimal cut and path sets. The presented method is illustrated on the press unit of a paper mill.
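Concretely, in the standard Gau-Buehrer formulation, a vague set \(A\) assigns each element \(x\) a truth membership \(t_A(x)\) and a false membership \(f_A(x)\) with \(t_A(x) + f_A(x) \le 1\), so the grade of membership is only known to lie in an interval:

\[
\mu_A(x) \in \left[\, t_A(x),\; 1 - f_A(x) \,\right],
\]

with the interval width \(1 - t_A(x) - f_A(x)\) quantifying the hesitation (lack of knowledge) about \(x\).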

Keywords: Lambda-Tau methodology, Petri nets, repairable system, vague fuzzy set.

387 Development of Piezoelectric Gas Micro Pumps with the PDMS Check Valve Design

Authors: Chiang-Ho Cheng, An-Shik Yang, Hong-Yih Cheng, Ming-Yu Lai

Abstract:

This paper presents the design and fabrication of a novel piezoelectric actuator for a gas micro pump with check valves, offering the advantages of miniature size, light weight and low power consumption. The micro pump has eight major components: a stainless steel upper cover layer, a piezoelectric actuator, a stainless steel diaphragm, a PDMS chamber layer, two stainless steel channel layers with two valve seats, a PDMS check valve layer with two cantilever-type check valves, and an acrylic substrate. A prototype of the gas micro pump, 52 mm × 50 mm × 5.0 mm in size, was fabricated by precision manufacturing. The device is designed to pump gases with self-priming and bubble-tolerant operation, achieved by maximizing the stroke volume of the membrane and the compression ratio through minimization of the dead volume of the micro pump chamber and channel. The experimental apparatus captures real-time values of the micro pump flow rate and the displacement of the piezoelectric actuator simultaneously. The gas micro pump performed best when driven by a 250 Vpp sinusoidal waveform, achieving a maximum pumping rate of 1185 ml/min and a maximum back pressure of 7.14 kPa at frequencies of 120 Hz and 50 Hz, respectively.

Keywords: PDMS, Check valve, Micro pump, Piezoelectric.

386 Removal of CO2 and H2S Using Aqueous Alkanolamine Solutions

Authors: H. Zare Aliabad, S. Mirzaei

Abstract:

This work presents a theoretical investigation of the simultaneous absorption of CO2 and H2S into aqueous solutions of MDEA and DEA. In this process, the acid components react with the basic alkanolamine solution via an exothermic, reversible reaction in a gas/liquid absorber. The use of amine solvents for gas sweetening was investigated with the process simulators HYSYS and ASPEN, using the Electrolyte NRTL model, the Amine Package and the Amines (experimental) equation of state. The effects of temperature, circulation rate, amine concentration, packing height and Murphree efficiency on the rate of absorption were studied. When the lean amine flow and concentration increase, CO2 and H2S absorption increases too. As the inlet amine temperature in the absorber rises, CO2 and H2S penetrate to the upper stages of the absorber and the absorption of acid gases decreases. The CO2 concentration in the clean gas can be greatly influenced by the packing height, whereas for the H2S concentration in the clean gas the packing height plays a minor role. HYSYS cannot estimate the Murphree efficiency correctly and applies the same value throughout the column. With improved Murphree efficiency, the maximum temperature of the absorber decreases, the reaction zone moves to the bottom stages of the absorber, and the absorption of acid gases increases.

Keywords: Absorber, DEA, MDEA, Simulation.

385 Stackelberg Security Game for Optimizing Security of Federated Internet of Things Platform Instances

Authors: Violeta Damjanovic-Behrendt

Abstract:

This paper presents an approach to optimal cyber security decisions for protecting instances of a federated Internet of Things (IoT) platform in the cloud. The presented solution implements the repeated Stackelberg Security Game (SSG) and a model called the Stochastic Human behaviour model with AttRactiveness and Probability weighting (SHARP). SHARP employs the Subjective Utility Quantal Response (SUQR) to formulate a subjective utility function based on the evaluation of alternative solutions during decision-making. We augment the repeated SSG (including SHARP and SUQR) with a reinforcement learning algorithm called Naïve Q-Learning, which belongs to the category of active, model-free Machine Learning (ML) techniques in which the agent (either the defender or the attacker) attempts to find an optimal security solution. In this way, we combine game-theoretic and ML algorithms for discovering optimal cyber security policies. The proposed security optimization components will be validated in a collaborative cloud platform based on the Industrial Internet Reference Architecture (IIRA) and its recently published security model.
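Since the abstract names Naïve Q-Learning but gives no pseudocode, the following Python sketch only illustrates a tabular Q-update for a defender repeatedly choosing one of several targets to cover; the attacker model, reward values and stateless (bandit-style) formulation are all hypothetical stand-ins for the SSG/SHARP setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_targets, episodes = 5, 2000
alpha, gamma_, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration
Q = np.zeros(n_targets)               # stateless Q-table over defender actions

def attacker_hits(defended):
    # stand-in attacker: prefers target 3, occasionally deviates
    target = 3 if rng.random() < 0.7 else int(rng.integers(n_targets))
    return target != defended         # True means a successful attack

for _ in range(episodes):
    # epsilon-greedy action selection
    a = int(rng.integers(n_targets)) if rng.random() < eps else int(np.argmax(Q))
    reward = -1.0 if attacker_hits(a) else 1.0
    # Q-learning update; with no state transition the bootstrap term
    # reduces to the current best action value
    Q[a] += alpha * (reward + gamma_ * Q.max() - Q[a])

print(int(np.argmax(Q)))              # learned coverage choice
```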

Keywords: Security, internet of things, cloud computing, Stackelberg security game, machine learning, Naïve Q-learning.

384 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1,2,4-Triazoles as Anti-Tumor Agents

Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi

Abstract:

In this paper, we report a quantitative structure-activity relationship study of novel bis-triazole derivatives for predicting their activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used for CoMSIA QSAR modeling, and the partial least-squares (PLS) method was used for the statistical analysis and to derive a QSAR model based on the CoMSIA descriptor field values. The compounds were divided into test and training sets and evaluated with various CoMSIA parameters to find the best QSAR model. An optimum number of components was first determined separately by cross-validated regression for the CoMSIA model and then applied in the final analysis. A series of parameters was examined, and the best-fit model was obtained using the donor, partition coefficient and steric parameters. The CoMSIA model showed good statistical results, with a regression coefficient (r2) and a cross-validated coefficient (q2) of 0.575 and 0.830, respectively, and a standard error for the predicted model of 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR model is consistent with the observation that more than half of the binding site area is occupied by steric regions.
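As an illustration of how an optimum number of PLS components can be chosen by cross-validation, the Python sketch below computes a leave-one-out q2 on placeholder data; the descriptor matrix, activity values and component range are stand-ins, not the paper's CoMSIA fields.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(46, 120))   # placeholder CoMSIA field descriptors
y = rng.normal(size=46)          # placeholder activity values

def q2_loo(X, y, n_components):
    """Leave-one-out cross-validated q2 = 1 - PRESS / SS_total."""
    press, ss = 0.0, ((y - y.mean()) ** 2).sum()
    for tr, te in LeaveOneOut().split(X):
        model = PLSRegression(n_components=n_components).fit(X[tr], y[tr])
        press += float(((y[te] - model.predict(X[te]).ravel()) ** 2).sum())
    return 1.0 - press / ss

# pick the component count that maximizes cross-validated q2
best_k = max(range(1, 8), key=lambda k: q2_loo(X, y, k))
```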

Keywords: 3D QSAR, CoMSIA, Triazoles.

383 The Mechanical and Electrochemical Properties of DC-Electrodeposited Ni-Mn Alloy Coating with Low Internal Stress

Authors: Chun-Ying Lee, Kuan-Hui Cheng, Mei-Wen Wu

Abstract:

A nickel-manganese (Ni-Mn) alloy coating prepared by DC electrodeposition in a sulphamate bath was studied. The effects of process parameters, such as current density and electrolyte composition, on the cathodic current efficiency, microstructure, internal stress and mechanical properties were investigated. Because of its crucial role in the electroforming of microelectronic components, the development of a low-internal-stress coating with high leveling power was emphasized. Both the manganese content of the coating and the cathodic current efficiency were found to increase with current density. In addition, the internal stress of the deposited coating was compressive at low current densities but became tensile at higher current densities. Metallographic observation, X-ray diffraction measurement and polarization curve measurement were also conducted. The Ni-Mn coating was found to consist of nano-sized columnar grains, and the maximum hardness of the coating was associated with a (111) preferred orientation in the microstructure. The grain size was refined as the manganese content of the coating increased, which accordingly raised its hardness and its resistance to annealing softening. In summary, the Ni-Mn coating prepared at a lower current density of 1-2 A/dm2 had low internal stress, high leveling power and better corrosion resistance.

Keywords: DC plating, internal stress, leveling power, Ni-Mn coating.

382 Optimization of Double Wishbone Suspension System with Variable Camber Angle by Hydraulic Mechanism

Authors: Mohammad Iman Mokhlespour Esfahani, Masoud Mosayebi, Mohammad Pourshams, Ahmad Keshavarzi

Abstract:

The accuracy of modern multibody dynamic vehicle simulation has progressed significantly, providing acceptable results not only for passive vehicles but also for active vehicles equipped with advanced electronic components. Improving vehicle safety by design has recently received much attention, and many efforts have been made to increase vehicle stability, especially in turns. One of the most important of these is adjusting the camber angle in the car suspension system: optimal camber control benefits not only vehicle stability but also wheel adhesion to the road, tire wear, and acceleration and braking performance. Since any increase or decrease in camber angle affects vehicle stability, this paper introduces a suspension mechanism that can adjust the camber angle and is both practical and inexpensive. To this end, a passive double wishbone suspension system with variable camber angle is introduced, and a variable camber mechanism is designed and analyzed. To study the performance of the designed system, the mechanism is modeled in Visual Nastran software and a kinematic analysis is presented.

Keywords: Suspension modeling, double wishbone, variable camber, hydraulic mechanism.

381 Apoptosis Pathway Targeted by Thymoquinone in MCF7 Breast Cancer Cell Line

Authors: M. Marjaneh, M. Y. Narazah, H. Shahrul

Abstract:

Array-based gene expression analysis is a powerful tool to profile the expression of genes and to generate information on the therapeutic effects of new anti-cancer compounds. The apoptotic effect of thymoquinone was studied in the MCF7 breast cancer cell line using gene expression profiling with a cDNA microarray. The purity and yield of the RNA samples were determined using the RNeasy Plus Mini kit, and their quantity was evaluated with the Agilent RNA 6000 NanoLabChip kit. An AffinityScript RT oligo-dT promoter primer was used to generate cDNA strands, and T7 RNA polymerase converted the cDNA to cRNA. The cRNA samples and human universal reference RNA were labelled with Cy-3-CTP and Cy-5-CTP, respectively, and the data were analysed with the Feature Extraction and GeneSpring software packages. The single-experiment analysis revealed the involvement of 64 pathways with up-regulated genes and 78 pathways with down-regulated genes. The MAPK and p38-MAPK pathways were inhibited due to the up-regulation of the PTPRR gene, and the inhibition of p38-MAPK suggested up-regulation of the TGF-β pathway. Inhibition of p38-MAPK caused up-regulation of TP53 and down-regulation of Bcl2, indicating involvement of the intrinsic apoptotic pathway. Down-regulation of the CARD16 gene, an adaptor molecule regulating CASP1, suggested necrosis-like programmed cell death and the involvement of caspases in apoptosis. Furthermore, down-regulation of the GPCR and EGF-EGFR signalling pathways suggested a reduction of ER. Involvement of the AhR pathway, which controls the cytochrome P450 and glucuronidation pathways, indicated metabolism of thymoquinone. The findings showed differential expression of several genes in apoptosis pathways upon thymoquinone treatment in estrogen receptor-positive breast cancer cells.

Keywords: CARD16, CASP10, cDNA microarray, PTPRR, Thymoquinone.

380 Six Sigma Solutions and Their Benefit-Cost Ratio for Quality Improvement

Authors: S. Homrossukon, A. Anurathapunt

Abstract:

This applied research presents the improvement of production quality using six sigma solutions, together with an analysis of the benefit-cost ratio. The case of interest is the production of concrete tile, which faced a high rate of nonconforming products from inappropriate surface coating and a low process capability based on the strength property of the tile; surface coating and tile strength are the characteristics most critical to the quality of this product. The improvement followed the five stages of the six sigma methodology (define, measure, analyze, improve, control). After the improvement, the production yield rose to the required 80% target, the rate of defective products from the coating process was remarkably reduced from 29.40% to 4.09%, and the process capability based on strength increased from 0.87 to 1.08, as the customer required. The improvement saved materials losses of 3.24 million baht (0.11 million dollars). The benefits were quantified from (1) the reduction in the number of nonconforming tiles, valued at factory price, for the surface coating improvement, and (2) the materials saved through the increase in process capability. The benefit-cost ratio of the overall improvement reached 7.03. The investment showed no return during the define, measure and analyze stages and the start of the improve stage, after which the ratio kept increasing: the first three stages mainly determine the cause of the problem and its effects rather than improve the process, so benefits only begin to accrue in the improve stage. Within each stage, the individual benefit-cost ratio was much higher than the cumulative one, since costs accumulate from the first stage onwards. Considering the benefit-cost ratio during an improvement project supports cost-saving decisions both for similar activities within the project and for new projects. In conclusion, tracking the behavior of the benefit-cost ratio throughout a six sigma implementation provides useful data for managing quality improvement with optimal effectiveness, as an additional outcome of the regular six sigma procedure.
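For orientation, the benefit-cost ratio is simply

\[
\mathrm{BCR} = \frac{\text{cumulative benefit}}{\text{cumulative cost}};
\]

if the quoted materials saving of 3.24 million baht were the entire counted benefit, the overall ratio of 7.03 would imply a cumulative improvement cost of roughly \(3.24 / 7.03 \approx 0.46\) million baht. This back-calculation is an illustration only, since the abstract does not state the cost directly.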

Keywords: Six Sigma Solutions, Process Improvement, Quality Management, Benefit Cost Ratio.

379 Shock Induced Damage onto Free-Standing Objects in an Earthquake

Authors: Haider AlAbadi, Joe Petrolito, Nelson Lam, Emad Gad

Abstract:

In areas of low to moderate seismicity, many building contents and items of equipment are not positively fixed to the floor or tied to adjacent walls. Under seismically induced horizontal vibration, such contents and equipment can be damaged either by overturning or by the impacts associated with rocking. This paper focuses on estimating the shock imparted to typical contents and equipment due to rocking. A simplified analytical model is outlined that can be used to estimate the maximum acceleration of a rocking object given its basic geometric and mechanical properties, and the model is validated against experimental results. The experiments revealed that the maximum shock acceleration can be underestimated if the static stiffness of the material at the interface between the rocking object and the floor is used rather than its dynamic stiffness. Excellent agreement between the model and the experiments was found when the dynamic stiffness of the interface material was used; this was generally much higher than the corresponding static stiffness under all the investigated boundary conditions of the cushion. The proposed model can be a useful tool for the rapid assessment of shock-sensitive components considered for seismic rectification.

Keywords: Impact, shock, earthquakes, rocking, building contents, overturning.

378 Fast Wavelet Image Denoising Based on Local Variance and Edge Analysis

Authors: Gaoyong Luo

Abstract:

The wavelet-transform-based approach has been widely used for image denoising due to its multi-resolution nature, its ability to produce high levels of noise reduction, and the low level of distortion it introduces. However, when noise is removed, high-frequency components belonging to edges are also removed, which blurs signal features. This paper proposes a new method of image noise reduction based on local variance and edge analysis. The analysis is performed by dividing an image into 32 × 32 pixel blocks and transforming the data into the wavelet domain. A fast lifting wavelet spatial-frequency decomposition and reconstruction is developed, with the advantages of computational efficiency and minimized boundary effects. Adaptive thresholding based on local variance estimation and edge strength measurement can effectively reduce image noise while preserving the features of the original image corresponding to object boundaries. Experimental results demonstrate that the method performs well for images contaminated by natural and artificial noise, and that it can be adapted to different classes of images and types of noise. The proposed algorithm provides a potential solution, with parallel computation, for real-time or embedded system applications.
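A minimal Python sketch of block-wise wavelet thresholding in the spirit of the method is given below, using the PyWavelets package; the variance-scaled threshold is a simplified stand-in, and the paper's edge-strength term and custom lifting implementation are not reproduced.

```python
import numpy as np
import pywt

def denoise_block(block, wavelet="db4", level=2):
    """Threshold a block's detail coefficients with a limit scaled by the
    block's local variance, so flat blocks are smoothed harder than busy
    (edge-rich) ones. Simplified stand-in for the paper's rule."""
    coeffs = pywt.wavedec2(block.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate (cD)
    t = sigma * np.sqrt(2 * np.log(block.size))          # universal threshold
    t *= sigma**2 / (block.var() + 1e-12)                # ease off on busy blocks
    out = [coeffs[0]] + [
        tuple(pywt.threshold(d, t, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(out, wavelet)

def denoise_image(img, bs=32):
    out = np.zeros(img.shape, dtype=float)
    for i in range(0, img.shape[0], bs):
        for j in range(0, img.shape[1], bs):
            blk = img[i:i + bs, j:j + bs]
            rec = denoise_block(blk)
            out[i:i + bs, j:j + bs] = rec[:blk.shape[0], :blk.shape[1]]
    return out
```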

Keywords: Edge strength, Fast lifting wavelet, Image denoising, Local variance.
