Search results for: Fuzzy Expert System Design
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12012


582 Utilization of Whey for the Production of β-Galactosidase Using Yeast and Fungal Culture

Authors: Rupinder Kaur, Parmjit S. Panesar, Ram S. Singh

Abstract:

Whey is the lactose-rich by-product of the dairy industry and a good reservoir of nutrients, the most abundant being lactose, soluble proteins, lipids and mineral salts. Its disposal by milk plants that lack a proper pre-treatment system is a major issue and can result in a significant loss of a potential food and energy source. Whey has therefore been explored as a substrate for the synthesis of different value-added products such as enzymes. β-galactosidase is one of the important enzymes and has become a major focus of research due to its ability to catalyze both hydrolysis and transgalactosylation simultaneously. The enzyme is widely used in the dairy industry, as it catalyzes the transformation of lactose into glucose and galactose, making products suitable for lactose-intolerant people. The enzyme is intracellular in both bacteria and yeast, whereas in molds it is extracellular. The present work was carried out to utilize whey for the production of β-galactosidase using both yeast and fungal cultures. The yeast isolate Kluyveromyces marxianus WIG2 and various fungal strains were used in the present study. Different disruption techniques were also investigated for extracting the enzyme produced intracellularly by yeast cells. Among the methods tested for disrupting the yeast cells, SDS-chloroform gave the maximum β-galactosidase activity. Among the tested fungal cultures, Aureobasidium pullulans NCIM 1050 was observed to be the maximum extracellular enzyme producer.

Keywords: β-galactosidase, fungus, yeast, whey.

581 Identifying Knowledge Gaps in Incorporating Toxicity of Particulate Matter Constituents for Developing Regulatory Limits on Particulate Matter

Authors: Ananya Das, Arun Kumar, Gazala Habib, Vivekanandan Perumal

Abstract:

Regulatory bodies have proposed limits on particulate matter (PM) concentration in air; however, these limits do not explicitly incorporate the toxicities of the individual constituents of PM. This study aimed to provide a structured approach for incorporating the toxic effects of PM components when developing regulatory limits on PM. A four-step human health risk assessment framework was used, consisting of: (1) hazard identification (parameters: PM and its constituents and their associated toxic effects on health), (2) exposure assessment (parameters: concentrations of PM and its constituents, information on the size and shape of PM, and the fate and transport of PM and its constituents in the respiratory system), (3) dose-response assessment (parameters: reference dose or target toxicity dose of PM and its constituents), and (4) risk estimation (metric: hazard quotient and/or lifetime incremental cancer risk, as applicable). The parameters required at each step were then obtained from the literature. Using this information, an attempt was made to determine limits on PM using component-specific information. An example calculation was conducted for exposure to PM2.5 and its metal constituents in the Indian ambient environment to determine a limit on PM. The identified data gaps were: (1) concentrations of PM and its constituents and their relationship with sampling regions, and (2) the relationship between the toxicity of PM and its components.
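
As an illustration of step (4), the sketch below shows how a hazard quotient per constituent and a summed hazard index can be used to back-calculate a component-aware PM2.5 limit. It is a minimal sketch of the general approach described above, not the authors' exact calculation; the metal mass fractions and reference concentrations are hypothetical placeholders.

```python
# A minimal sketch of back-calculating a component-aware PM2.5 limit so that the
# summed hazard quotients (the hazard index) equal 1. All values are assumed.

metal_mass_fraction = {"Pb": 0.002, "Ni": 0.0005, "Mn": 0.001}   # g metal / g PM2.5 (assumed)
reference_conc_ugm3 = {"Pb": 0.15, "Ni": 0.014, "Mn": 0.05}      # RfC-like values (assumed)

def hazard_index(pm_ugm3):
    """Hazard index = sum of hazard quotients HQ_i = C_i / RfC_i."""
    return sum(pm_ugm3 * f / reference_conc_ugm3[m]
               for m, f in metal_mass_fraction.items())

# Largest PM2.5 concentration whose component mix keeps HI <= 1.
# HI is linear in the PM concentration, so the limit is 1 / HI(1).
pm_limit = 1.0 / hazard_index(1.0)
print(f"Component-based PM2.5 limit ~ {pm_limit:.1f} ug/m3, HI = {hazard_index(pm_limit):.2f}")
```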

Keywords: Air, component-specific toxicity, human health risks, particulate matter.

580 Oracle JDE Enterprise One ERP Implementation: A Case Study

Authors: Abhimanyu Pati, Krishna Kumar Veluri

Abstract:

The paper presents a real-life experience encountered during the actual implementation of a large-scale Tier-1 Enterprise Resource Planning (ERP) system in a multi-location, discrete manufacturing organization in India involved in the manufacture of auto components and aggregates. The business complexities prior to the implementation of ERP included multiple products with hierarchical product structures, geographically distributed plant locations with disparate business practices, a lack of inter-plant broadband connectivity, disparate legacy applications for different business functions, and non-standardized codification of products, machines, employees, and accounts, among others. The manufacturing environment consisted of processes such as Assemble-to-Order (ATO), Make-to-Stock (MTS), and Engineer-to-Order (ETO), with a mix of discrete and process operations. The paper highlights the various business plan areas and concerns prior to the implementation, with specific focus on strategic issues and objectives. Subsequently, it deals with the complete process of ERP implementation, from strategic planning and project planning through resource mobilization to program execution. The step-by-step process provides a very good learning opportunity about the implementation methodology. Finally, the organizational challenges and lessons that emerged are presented; these act as guidelines and a checklist for organizations seeking to successfully align and implement ERP and achieve their business objectives.

Keywords: ERP, ATO, MTS, ETO, discrete manufacturing, strategic planning.

579 Application of a Theoretical Framework as a Context for a Travel Behavior Change Policy Intervention

Authors: F. Moghtaderi, M. Burke, J. Troelsen

Abstract:

There has been a significant decline in active travel and a massive increase in car-dependent travel in many countries during the past two decades. This increased use of motorized travel is associated with documented risks to people's physical and mental health, ranging from overweight and obesity to the effects of increased air pollution. In response to these rising concerns, health professionals, traffic planners, local authorities and others have introduced a variety of initiatives to counterbalance the dominance of cars for daily journeys. However, travel behavior change interventions that aim to reduce car use are complex and challenging in their interactions with human behavior. To change travel behavior, at least two aspects have to be taken into consideration: first, how to alter attitudes and perceptions toward sustainable and healthy modes of travel in competition with the experience of private car use; and second, how to make these behavior change processes irreversible and sustainable. There are no comprehensive models available to guide policy interventions to increase the level of success of travel behavior change interventions across both of these dimensions. A comprehensive theoretical framework is required to facilitate and guide the processes of data collection and analysis and to produce the best possible guidelines for policy makers. In view of these gaps in the travel behavior change literature, this paper identifies and suggests a multidimensional framework to facilitate the planning of travel behavior change interventions. A structured mixed-method model is suggested to improve the analytic power of the results, given the complexity of human behavior. The Theory of Planned Behavior (TPB) was operationalized to capture people's attitudes towards a specific travel mode, while the Transtheoretical Model of Behavior Change (TTM) was used to capture decision-making processes. The combination of these two theories (TTM and TPB) results in a synthesis with appropriate concepts for identifying and designing travel behavior change interventions.

Keywords: Behavior change theories, Theoretical framework, Travel behavior change interventions.

578 A Review of the Antecedents and Consequences of Employee Engagement

Authors: Ibrahim Hamidu Magem

Abstract:

Employee engagement has continued to gain popularity among practitioners, consultants and academicians in recent years, because engaged employees are central to organizational success in today's highly competitive and rapidly changing business environment. Employee engagement describes a situation in which employees harness themselves to their work roles. Its importance to organizations cannot be overemphasized. Organizations both large and small are constantly striving to improve their performance, retain employees, reduce absenteeism, and create loyal customers, among other goals; to achieve these, organizations need a team of highly engaged employees. In line with this, the study attempts to provide a valuable framework for understanding the antecedents and consequences of employee engagement in organizations. The paper categorizes the antecedents of employee engagement into individual and organizational factors, on the assumption that the presence of such factors results in engaged employees who benefit the organization. It is therefore recommended that organizations revisit and redesign their employee engagement systems to enable them to attain their organizational goals and objectives. In addition, organizations should note that while engagement is personal, organizational engagement programmes should be about everyone in the organization. The findings of this paper add to existing studies on employee engagement and raise awareness among academics and practitioners of the importance of employee engagement in improving organizational efficiency and effectiveness, as well as overall firm performance.

Keywords: Antecedent, employee engagement, job involvement, organization.

577 Dynamic Analysis of a Moderately Thick Plate on Pasternak Type Foundation under Impact and Moving Loads

Authors: Neslihan Genckal, Reha Gursoy, Vedat Z. Dogan

Abstract:

In this study, the dynamic responses of composite plates on elastic foundations subjected to impact and moving loads are investigated. First-order shear deformation theory (FSDT) is used for moderately thick plates, and a Pasternak-type (two-parameter) elastic foundation is assumed. Elastic foundation effects are integrated into the governing equations. The plate is assumed to be first hit by a mass (impact loading); the mass then continues to move on the composite plate as a distributed moving load, which resembles an aircraft landing on an airport pavement. The impact and moving loads are modeled by a mass-spring-damper system with a wheel, and the wheel is assumed to remain continuously in contact with the plate after impact. The governing partial differential equations of motion for the displacements are converted into ordinary differential equations in the time domain using Galerkin's method, and these sets of equations are then solved using the Runge-Kutta method. Several parameters, such as the vertical and horizontal velocities of the aircraft, the volume fraction of steel rebar in the reinforced concrete layer, and different touchdown locations of the aircraft tire on the runway, are considered in the numerical simulations. The results are compared with those of ABAQUS, a commercial finite element code.
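
To illustrate the solution strategy described above, the sketch below reduces the problem to a single Galerkin mode, which turns the governing equations into one ordinary differential equation that is then integrated with a Runge-Kutta scheme. It is only a schematic of the approach, not the paper's FSDT plate model; all parameter values are assumed.

```python
# A minimal sketch: one Galerkin mode of a plate on a Pasternak foundation gives
# m*q'' + c*q' + (k_plate + k_found)*q = F(t), integrated here with RK45.
import numpy as np
from scipy.integrate import solve_ivp

m, c = 250.0, 400.0               # modal mass [kg] and damping [N s/m] (assumed)
k_plate, k_found = 4.0e6, 1.2e6   # plate and foundation stiffness [N/m] (assumed)
F0, t_imp = 5.0e4, 0.02           # impact amplitude [N] and duration [s] (assumed)

def rhs(t, y):
    q, qd = y
    F = F0 if t < t_imp else 0.0  # short impact, after which the load moves on
    return [qd, (F - c*qd - (k_plate + k_found)*q) / m]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0], method="RK45", max_step=1e-4)
print(f"peak modal deflection = {np.max(np.abs(sol.y[0]))*1e3:.2f} mm")
```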

Keywords: Elastic foundation, impact, moving load, thick plate.

576 Effect of Infill Walls on Response of Multi Storey Reinforced Concrete Structure

Authors: Ayman Abd-Elhamed, Sayed Mahmoud

Abstract:

The present research investigates the seismic response of a reinforced concrete (RC) frame building considering the effect of modeling masonry infill (MI) walls. The seismic behavior of a residential 6-storey RC frame building, with and without the effect of masonry, is numerically investigated using response spectrum (RS) analysis. The building considered herein is designed as a moment resisting frame (MRF) system following the Egyptian code (EC) requirements. Two models, a bare frame and an infill-wall frame, are used in the study. The equivalent diagonal strut methodology is used to represent the behavior of the infill walls, while the well-known software package ETABS is used for implementing all frame models and performing the analysis. The results of the numerical simulations, such as base shear, displacements, and internal forces, for the bare frame as well as the infill-wall frame are presented in a comparative way. The results indicate that the interaction between infill walls and frames significantly changes the response of buildings during earthquakes compared to the bare frame model. Specifically, seismic analysis of the RC bare frame structure leads to underestimation of the base shear, and consequently damage or even collapse of buildings may occur under strong shaking. On the other hand, considering the infill walls significantly decreases the peak floor displacements and drifts in both the X and Y directions.

Keywords: Masonry infill, bare frame, response spectrum, seismic response.

575 Development of Sustainable Building Environmental Model (SBEM) in Hong Kong

Authors: Kwok W. Mui, Ling T. Wong, F. Xiao, Chin T. Cheung, Ho C. Yu

Abstract:

This study addresses a Sustainable Building Environmental Model (SBEM) developed to optimize energy consumption in air conditioning and ventilation (ACV) systems without any deterioration of indoor environmental quality (IEQ). The SBEM incorporates two main components: an adaptive comfort temperature control module (ACT) and a new carbon dioxide demand control module (nDCV). These two modules take an innovative approach to maintaining IEQ satisfaction with optimum energy consumption, and they provide a rational basis for effective control. A total of 2133 sets of measurements of indoor air temperature (Ta), relative humidity (Rh) and carbon dioxide concentration (CO2) were collected in Hong Kong offices to investigate the potential of integrating the SBEM. A simulation was used to evaluate the dynamic performance of the energy and air conditioning system with the SBEM integrated in an air-conditioned building; it allows the control strategies to be examined clearly and the controllers to be pre-tuned before being used in real systems. With the integration of the SBEM, the simulation showed savings of up to 12.3% in overall electricity consumption while maintaining the average carbon dioxide concentration within 1000 ppm and occupant dissatisfaction within 20%.
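
The nDCV module rests on the idea that the outdoor-air flow can be modulated from the measured CO2 level. A minimal sketch of the underlying steady-state CO2 mass balance is given below; the per-person generation rate and outdoor concentration are assumed typical values, not figures from the paper.

```python
# A minimal sketch of the steady-state CO2 mass balance behind demand-controlled
# ventilation: outdoor-air flow is set so indoor CO2 stays at the 1000 ppm target.
def required_outdoor_air(occupants, co2_setpoint_ppm=1000.0, co2_outdoor_ppm=400.0,
                         gen_per_person_m3s=5.2e-6):
    """Outdoor-air flow [m3/s] keeping steady-state CO2 at the setpoint (assumed inputs)."""
    delta = (co2_setpoint_ppm - co2_outdoor_ppm) * 1e-6   # ppm -> volume fraction
    return occupants * gen_per_person_m3s / delta

for n in (10, 30, 60):
    print(f"{n:3d} occupants -> {required_outdoor_air(n)*1000:.1f} L/s outdoor air")
```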

Keywords: Sustainable building environmental model (SBEM), adaptive comfort temperature (ACT), new demand control ventilation (nDCV), energy saving.

574 Damage to Strawberries Caused by Simulated Transport

Authors: G. La Scalia, M. Enea, R. Micale, O. Corona, L. Settanni

Abstract:

The quality and condition of perishable products delivered to the market, and their subsequent selling prices, are directly affected by the care taken during harvesting and handling. Mechanical injury occurs at all stages, from pre-harvest operations through post-harvest handling, packing and transport to the market. The main implications of this damage are a reduction in product quality and economic losses related to the reduced shelf life. For most perishable products, the shelf life is relatively short and is typically dictated by microbial growth related to the dynamic and static loads applied during transportation. This paper presents the correlation between vibration levels and microbiological growth on strawberries and woodland strawberries, and examines the presence of volatile organic compounds (VOCs), with the aim of developing an intelligent logistic unit capable of monitoring VOCs using a specific sensor system. Fresh fruits were exposed to vibrations by means of a vibrating table in a temperature-controlled environment. Microbiological analyses were conducted on samples taken at different positions along the column of the crates. The values obtained were compared with control samples not exposed to vibrations, and the results show that the position along the column influences the development of bacteria, yeasts and filamentous fungi.

Keywords: Microbiological analysis, shelf life, transport damage, volatile organic compounds.

573 Effect of Environmental Factors on Photoreactivation of Microorganisms under Indoor Conditions

Authors: Shirin Shafaei, James R. Bolton, Mohamed Gamal El Din

Abstract:

Ultraviolet (UV) disinfection damages the DNA or RNA of microorganisms, but many microorganisms can repair this damage after exposure to near-UV or visible wavelengths (310–480 nm) by a mechanism called photoreactivation. Photoreactivation is gaining more attention because it can reduce the efficiency of UV disinfection of wastewater several hours after treatment. Because most photoreactivation research has focused on single species, comparatively little is known about complex natural communities of microorganisms and their response to UV treatment. In this research, photoreactivation experiments were carried out on the influent of the UV disinfection unit at a municipal wastewater treatment plant (WWTP) in Edmonton, Alberta, after exposure to a medium-pressure (MP) UV lamp system, to evaluate the effect of environmental factors on the photoreactivation of microorganisms in actual municipal wastewater. The effects of reactivation fluence, temperature, and river water on the photoreactivation of total coliforms were examined under indoor conditions. The results showed that higher effective reactivation fluence values (up to 20 J/cm2) and higher temperatures (up to 25 °C) increased the photoreactivation of total coliforms. However, increasing the percentage of river water in mixtures of effluent and river water decreased the photoreactivation of the mixtures. The results of this research can help the municipal wastewater treatment industry to examine the environmental effects of discharging their effluents into receiving waters.

Keywords: Photoreactivation, reactivation fluence, river water, temperature, ultraviolet disinfection, wastewater effluent.

572 AJcFgraph - AspectJ Control Flow Graph Builder for Aspect-Oriented Software

Authors: Reza Meimandi Parizi, Abdul Azim Abdul Ghani

Abstract:

The ever-growing use of the aspect-oriented development methodology in software engineering requires tool support for both research environments and industry. So far, tool support for many activities in aspect-oriented software development has been proposed to automate and facilitate development; for instance, AJaTS provides a transformation system to support aspect-oriented development and refactoring. It is well established that the abstract interpretation of programs, in any paradigm, pursued in static analysis is best served by a high-level program representation such as the Control Flow Graph (CFG): such analysis can more easily locate common programmatic idioms for which helpful transformations are already known, and the association between the input program and the intermediate representation can be maintained more closely. However, although current research defines, to some extent, the concepts and foundations for control flow analysis of aspect-oriented programs, it does not provide a concrete tool dedicated to constructing the CFG of these programs. Furthermore, most of these works focus on other issues in Aspect-Oriented Software Development (AOSD), such as testing or data flow analysis, rather than the CFG itself. Therefore, this study is dedicated to building an aspect-oriented control flow graph construction tool called AJcFgraph Builder. The tool can be applied in many software engineering tasks in the context of AOSD, such as software testing and software metrics.

Keywords: Aspect-Oriented Software Development, AspectJ, Control Flow Graph, Data Flow Analysis

571 Shariah Views on the Components of Profit Rate in Al-Murabahah Asset Financing in Malaysian Islamic Bank

Authors: M. Pisol B Mat Isa, Asmak Ab Rahman, Hezlina Bt M Hashim, Abd Mutalib B Embong

Abstract:

Al-Murabahah is an Islamic financing facility used in asset financing. The profit rate of the contract is determined by components that are also used in conventional banking, namely the cost of funds, overhead cost, risk premium cost and the bank's profit margin. At the same time, the profit rate determined by the Islamic banking system also refers to the London Inter-Bank Offered Rate (LIBOR) as a benchmark. This practice has raised arguments among Muslim scholars regarding the validity of the contract and whether it remains Shariah compliant. This paper aims to explore the view of Shariah on the above components as practiced by Islamic banking in determining the profit rate of al-murabahah asset financing in Malaysia. This is a comparative study that applies the views of Muslim scholars from all major mazahib of Islamic jurisprudence and examines the practices of Islamic banks in Malaysia with respect to the above components. The study found that Shariah accepts all the components, with conditions: the cost of funds is accepted as a portion of al-mudarabah profit, the overhead cost is accepted as a cost of the product, the risk premium cost, consisting of business risk and mitigation risk, is accepted through the concept of al-ta'awun, and the bank's profit margin is accepted as a right of the bank after venturing into a risky investment.

Keywords: Islamic banking, Islamic finance, al-murabahah and asset financing

570 Empirical Roughness Progression Models of Heavy Duty Rural Pavements

Authors: Nahla H. Alaswadko, Rayya A. Hassan, Bayar N. Mohammed

Abstract:

Empirical deterministic models have been developed to predict the roughness progression of heavy duty spray sealed pavements for a dataset representing rural arterial roads. The dataset provides a good representation of the relevant network and covers a wide range of operating and environmental conditions. A large sample of historical time series data for many pavement sections was collected and prepared for use in multilevel regression analysis. The modelling parameters include road roughness as the performance parameter, and traffic loading, time, initial pavement strength, reactivity level of the subgrade soil, climate condition, and condition of the drainage system as predictor parameters. The purpose of this paper is to report the approaches adopted for model development and validation. The study presents multilevel models that account for the correlation among time series data from the same section and capture the effect of unobserved variables. The results show that the models fit the data very well. The contribution and significance of the relevant influencing factors in predicting roughness progression are presented and explained. The paper concludes that the analysis approach used for developing the models confirmed their accuracy and reliability, as the models fit the validation data well.
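
For readers unfamiliar with multilevel regression on pavement time series, the sketch below shows the general form of such a model, with repeated roughness observations grouped by pavement section. The variable names, file name and random-effects structure are assumptions for illustration, not the authors' specification.

```python
# A minimal sketch of a multilevel (mixed-effects) roughness-progression model:
# observations are grouped by section so within-section correlation is handled.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("roughness_timeseries.csv")   # hypothetical file, one row per survey
# columns assumed: iri, age_years, cum_traffic, strength, reactivity, rainfall, drainage, section_id

model = smf.mixedlm(
    "iri ~ age_years + cum_traffic + strength + reactivity + rainfall + drainage",
    data=df,
    groups=df["section_id"],      # random intercept per pavement section
    re_formula="~age_years",      # random slope on age: section-specific progression rate
)
result = model.fit()
print(result.summary())
```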

Keywords: Roughness progression, empirical model, pavement performance, heavy duty pavement.

569 ECG Based Reliable User Identification Using Deep Learning

Authors: R. N. Begum, Ambalika Sharma, G. K. Singh

Abstract:

Identity theft has serious ramifications beyond data and personal information loss, which necessitates robust and efficient user identification systems. Automatic biometric recognition systems are therefore the need of the hour, and electrocardiogram (ECG)-based systems are an appealing choice due to their inherent characteristics. Convolutional Neural Networks (CNNs) are the recent state-of-the-art technique for ECG-based user identification; however, the results obtained are significantly below standards, and the situation worsens as the number of users and the types of heartbeats in the dataset grow. This study therefore proposes a highly accurate and resilient ECG-based person identification system using densely connected CNNs. The research explicitly explores the capability of dense CNNs in the field of ECG-based human recognition, testing four different dense CNN configurations trained on a dataset of recordings collected from eight popular ECG databases. With a maximum False Acceptance Rate (FAR) of 0.04% and a maximum False Rejection Rate (FRR) of 5%, the best-performing network achieved an identification accuracy of 99.94%. The best network was also tested with various train/test split ratios. The findings show that DenseNets are not only extremely reliable but also highly efficient; thus, they might be implemented in real-time ECG-based human recognition systems.
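
As a rough illustration of the dense-connectivity idea exploited here, the sketch below builds a small densely connected 1-D convolutional block for heartbeat segments followed by a classification head. It is not the authors' architecture; layer sizes, the segment length and the number of enrolled subjects are assumed.

```python
# A minimal sketch of dense connectivity for 1-D ECG beats: every layer receives
# the concatenated feature maps of all previous layers (assumed sizes throughout).
import torch
import torch.nn as nn

class DenseBlock1d(nn.Module):
    def __init__(self, in_ch, growth=16, layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm1d(ch), nn.ReLU(),
                nn.Conv1d(ch, growth, kernel_size=5, padding=2)))
            ch += growth                      # the next layer sees all earlier outputs
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

block = DenseBlock1d(in_ch=1)
head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                     nn.Linear(block.out_channels, 90))   # e.g. 90 enrolled subjects (assumed)
beat = torch.randn(8, 1, 300)                             # batch of 300-sample heartbeats
print(head(block(beat)).shape)                            # torch.Size([8, 90])
```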

Keywords: Biometrics, dense networks, identification rate, train/test split ratio.

568 X-Ray Intensity Measurement Using Frequency Output Sensor for Computed Tomography

Authors: R. M. Siddiqui, D. Z. Moghaddam, T. R. Turlapati, S. H. Khan, I. Ul Ahad

Abstract:

The quality of the 2D and 3D cross-sectional images produced by computed tomography depends primarily on the precision of primary and secondary X-ray intensity detection. Traditional methods of primary intensity detection are prone to errors. Our group recently developed an X-ray intensity measurement system with smart X-ray sensors that detects the primary X-ray intensity reliably. In this study, a new smart X-ray sensor is developed using the light-to-frequency converter TSL230 from Texas Instruments, which has numerous advantages in terms of noiseless data acquisition and transmission. The TSL230 is based on a silicon photodiode, which converts the incoming X-ray radiation into a proportional current signal. A current-to-frequency converter on the same monolithic CMOS integrated circuit converts this current into a pulse train whose frequency is proportional to the incoming signal. The frequency count is delivered to a PICDEM FS USB board with a PIC18F4550 microcontroller mounted on it; with highly compact electronic hardware, this demo board efficiently reads the smart sensor output. The frequency-output approach overcomes the nonlinear behavior of sensors with analog output, so un-attenuated X-ray intensities can be measured precisely and better normalization can be achieved in order to attain high resolution.
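
The essence of the frequency-output approach is that intensity is recovered by counting pulses over a fixed gate time rather than digitizing an analog level. A minimal sketch of that conversion is shown below; the gate time and pulse counts are assumed values.

```python
# A minimal sketch of mapping a light-to-frequency reading to relative X-ray
# intensity: pulses counted over a gate time give the output frequency, and the
# intensity is taken as proportional to that frequency. Values are assumed.
def frequency_from_pulses(pulse_count, gate_time_s):
    return pulse_count / gate_time_s          # Hz

def relative_intensity(sample_counts, reference_counts, gate_time_s=0.1):
    """Normalise the attenuated-beam reading against the unattenuated reference."""
    f_sample = frequency_from_pulses(sample_counts, gate_time_s)
    f_ref = frequency_from_pulses(reference_counts, gate_time_s)
    return f_sample / f_ref

print(relative_intensity(sample_counts=41250, reference_counts=55000))  # ~0.75
```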

Keywords: Computed tomography, detector technology, X-Ray intensity measurement

567 Vision-Based Daily Routine Recognition for Healthcare with Transfer Learning

Authors: Bruce X. B. Yu, Yan Liu, Keith C. C. Chan

Abstract:

We propose to record the Activities of Daily Living (ADLs) of elderly people using a vision-based system so as to provide better assistive and personalization technologies. Current ADL-related research is based on data collected with help from non-elderly subjects in laboratory environments, where the activities performed are predetermined for the sole purpose of data collection. To obtain more realistic datasets for the application, we recorded ADLs for the elderly with data collected from a real-world environment involving real elderly subjects. Motivated by the need to collect data for more effective research related to elderly care, we chose to collect data in the room of an elderly person. Specifically, we installed a Kinect, a vision-based sensor, on the ceiling to capture the activities that the elderly subject performs every morning. Based on the data, we identified 12 morning activities that the elderly person performs daily. To recognize these activities, we created the HARELCARE framework to investigate the effectiveness of existing Human Activity Recognition (HAR) algorithms and propose the use of a transfer learning algorithm for HAR. We compared performance in terms of accuracy and training progress. Although the collected dataset is relatively small, the proposed algorithm has good potential to be applied to all daily routine activities for healthcare purposes, such as evidence-based diagnosis and treatment.
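
As a hedged illustration of the transfer-learning step, the sketch below freezes an ImageNet-pretrained backbone and trains only a new head for the 12 morning activities. This is a generic recipe under assumed settings (a recent torchvision with pretrained weights available), not the HARELCARE implementation.

```python
# A minimal transfer-learning sketch: freeze a pretrained backbone, replace the
# classification head with one sized for 12 daily-routine classes (assumed setup).
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="DEFAULT")      # pretrained backbone (torchvision >= 0.13)
for p in model.parameters():
    p.requires_grad = False                     # keep pretrained features fixed
model.fc = nn.Linear(model.fc.in_features, 12)  # new head: 12 morning activities

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```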

Keywords: Daily activity recognition, healthcare, IoT sensors, transfer learning.

566 Resilient Manufacturing: Use of Augmented Reality to Advance Training and Operating Practices in Manual Assembly

Authors: L. C. Moreira, M. Kauffman

Abstract:

This paper outlines the results of experimental research on deploying an emerging augmented reality (AR) system for real-time task assistance (work instructions) in highly customised and high-risk manual operations. The focus is on human operators' training effectiveness and performance, and the aim is to test whether such technologies can enhance knowledge retention and the accuracy of task execution to improve health and safety (H&S). An AR-enhanced assembly method is proposed and experimentally tested using a real industrial process, electric vehicle (EV) battery module assembly, as a case study. The experimental results revealed that the proposed method improved training practices and performance, increasing knowledge retention from 40% to 84% and the accuracy of task execution from 20% to 71% compared to the traditional paper-based method. The results of this research validate and demonstrate how emerging technologies are advancing the choice between manual, hybrid and fully automated processes by promoting XR-assisted processes and the connected worker (a vision for Industry 4.0 and 5.0), and by helping manufacturing become more resilient in times of constant market change.

Keywords: Augmented reality, extended reality, connected worker, XR-assisted operator, manual assembly 4.0, industry 5.0, smart training, battery assembly.

565 Performance Analysis and Optimization for Diagonal Sparse Matrix-Vector Multiplication on Machine Learning Unit

Authors: Qiuyu Dai, Haochong Zhang, Xiangrong Liu

Abstract:

Efficient matrix-vector multiplication with diagonal sparse matrices is pivotal in a multitude of computational domains, ranging from scientific simulations to machine learning workloads. When encoded in the conventional Diagonal (DIA) format, these matrices often induce computational overheads due to extensive zero-padding and non-linear memory accesses, which can hamper the computational throughput, and elevate the usage of precious compute and memory resources beyond necessity. The ’DIA-Adaptive’ approach, a methodological enhancement introduced in this paper, confronts these challenges head-on by leveraging the advanced parallel instruction sets embedded within Machine Learning Units (MLUs). This research presents a thorough analysis of the DIA-Adaptive scheme’s efficacy in optimizing Sparse Matrix-Vector Multiplication (SpMV) operations. The scope of the evaluation extends to a variety of hardware architectures, examining the repercussions of distinct thread allocation strategies and cluster configurations across multiple storage formats. A dedicated computational kernel, intrinsic to the DIA-Adaptive approach, has been meticulously developed to synchronize with the nuanced performance characteristics of MLUs. Empirical results, derived from rigorous experimentation, reveal that the DIA-Adaptive methodology not only diminishes the performance bottlenecks associated with the DIA format but also exhibits pronounced enhancements in execution speed and resource utilization. The analysis delineates a marked improvement in parallelism, showcasing the DIA-Adaptive scheme’s ability to adeptly manage the interplay between storage formats, hardware capabilities, and algorithmic design. The findings suggest that this approach could set a precedent for accelerating SpMV tasks, thereby contributing significantly to the broader domain of high-performance computing and data-intensive applications.
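
For reference, the sketch below shows sparse matrix-vector multiplication in the plain DIA format that the DIA-Adaptive scheme improves upon, including the zero-padding that appears where a diagonal runs off the matrix. The matrix content is an arbitrary example and the code is a baseline illustration, not the MLU kernel described in the paper.

```python
# A minimal DIA-format SpMV baseline: each stored diagonal has an offset, indexed
# here by row, and padding fills the positions where the diagonal does not exist.
import numpy as np

def dia_spmv(data, offsets, x):
    """y = A @ x for a square matrix stored as DIA (data: n_diags x n, row-indexed)."""
    n = x.size
    y = np.zeros(n)
    for d, off in enumerate(offsets):
        rows = np.arange(max(0, -off), min(n, n - off))   # rows where this diagonal exists
        cols = rows + off
        y[rows] += data[d, rows] * x[cols]
    return y

# 4x4 tridiagonal example with offsets -1, 0, +1
offsets = np.array([-1, 0, 1])
data = np.array([[0., 1., 1., 1.],    # sub-diagonal (padding at row 0)
                 [2., 2., 2., 2.],    # main diagonal
                 [3., 3., 3., 0.]])   # super-diagonal (padding at the last row)
x = np.array([1., 2., 3., 4.])
print(dia_spmv(data, offsets, x))     # [ 8. 14. 20. 11.]
```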

Keywords: Adaptive method, DIA, diagonal sparse matrices, MLU, sparse matrix-vector multiplication.

564 Study on Optimization of Air Infiltration at Entrance of a Commercial Complex in Zhejiang Province

Authors: Yujie Zhao, Jiantao Weng

Abstract:

In the past decade, with the rapid development of China's economy, the purchasing power and physical demands of residents have improved, resulting in the widespread emergence of public buildings such as large shopping malls. However, architects usually focus on the internal functions and circulation of these buildings, ignoring the impact of the environment on the subjective experience of building users. In Zhejiang province alone, infiltration of cold air in winter frequently occurs at the entrances of sizeable commercial complex buildings in operation, affecting the environmental comfort of the building lobby and internal public spaces. At present, these adverse effects are usually reduced by adding active equipment, such as air curtains to block air exchange or additional heating air conditioners. From the perspective of energy consumption, the infiltration of cold air at the entrance increases the heat consumption of indoor heating equipment, indirectly causing considerable economic losses over the whole winter heating season. It is therefore of considerable significance to explore suitable entrance forms that improve the environmental comfort of commercial buildings and save energy. In this paper, a commercial complex in Hangzhou with an apparent cold air infiltration problem is selected as the research object for establishing a model. The environmental parameters of the building entrance, including temperature, wind speed, and infiltration air volume, are obtained by Computational Fluid Dynamics (CFD) simulation, from which the heat consumption caused by natural air infiltration in winter and its potential economic loss are estimated as the objective metrics. The study finally obtains the optimization direction for the entrance form of the commercial complex by comparing the simulation results with those of other local commercial complex projects with different entrance forms. The conclusions will guide the entrance design of the same type of commercial complex in this area.
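
The objective metric described above can be illustrated with a simple back-of-the-envelope estimate: the sensible heat carried away by the infiltrating cold air, accumulated over the heating season. The flow rate, temperatures, operating hours and tariff below are assumed values, not the paper's CFD results.

```python
# A minimal sketch of the infiltration heat-loss metric: Q = rho * V_dot * c_p * dT,
# accumulated over an assumed heating season and priced at an assumed tariff.
RHO_AIR, CP_AIR = 1.2, 1005.0            # kg/m3, J/(kg K)

def infiltration_heat_kw(flow_m3s, t_indoor_c, t_outdoor_c):
    return RHO_AIR * flow_m3s * CP_AIR * (t_indoor_c - t_outdoor_c) / 1000.0

q_kw = infiltration_heat_kw(flow_m3s=2.5, t_indoor_c=20.0, t_outdoor_c=3.0)  # assumed values
season_hours = 90 * 12                    # 90 heating days x 12 operating hours (assumed)
energy_kwh = q_kw * season_hours
print(f"{q_kw:.1f} kW -> {energy_kwh:.0f} kWh per winter "
      f"-> ~{energy_kwh*0.12:.0f} currency units at an assumed 0.12/kWh tariff")
```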

Keywords: Air infiltration, commercial complex, heat consumption, CFD simulation.

563 A Novel Machining Signal Filtering Technique: Z-notch Filter

Authors: Nuawi M. Z., Lamin F., Ismail A. R., Abdullah S., Wahid Z.

Abstract:

A filter is used to remove undesirable frequency content from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor, and the machine environment. By correlating the noise components with the measured machining signal, the components of interest, which are less affected by noise, can be extracted. The filtered signal is thus more reliable for analysis in terms of noise content than the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal: a larger scattering space and a higher value of Z∞ indicate that the signal is heavily contaminated by noise. This method can be utilised as a proactive tool for evaluating the noise content of a signal. Both the evaluation and the elimination of noise content are very important, especially for machining operation fault diagnosis. The Z-notch filtering technique was reliable in removing the noise components from the measured machining signal with high efficiency. Even though the measured signal was exposed to strong noise disruption, the signal generated by the interaction between the cutting tool and the workpiece could still be acquired. Therefore, noise interference that could alter the original signal features and consequently degrade the useful sensory information can be eliminated.
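
The general notch-filtering step can be illustrated with a standard IIR notch centred on a previously identified noise tone, as sketched below. This uses a generic filter from SciPy purely for illustration; the Z-notch technique itself is the authors' own method, and the sampling rate and tone frequencies are assumed.

```python
# A minimal sketch of removing an identified noise tone from a machining signal
# with a standard IIR notch filter (illustrative only; frequencies are assumed).
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 20_000.0                                  # sampling rate [Hz] (assumed)
t = np.arange(0, 0.2, 1/fs)
cutting = 0.6*np.sin(2*np.pi*1_200*t)          # signal of interest (assumed tone)
hydraulic_noise = 0.9*np.sin(2*np.pi*3_000*t)  # noise component identified beforehand
raw = cutting + hydraulic_noise + 0.05*np.random.randn(t.size)

b, a = iirnotch(w0=3_000, Q=30, fs=fs)         # notch centred on the noise frequency
filtered = filtfilt(b, a, raw)                 # zero-phase filtering

print("RMS raw      :", np.sqrt(np.mean(raw**2)).round(3))
print("RMS filtered :", np.sqrt(np.mean(filtered**2)).round(3))
```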

Keywords: Digital signal filtering, I-kaz method, Machiningmonitoring, Noise Cancelling, Sound

562 Using 3-Glycidoxypropyltrimethoxysilane Functionalized SiO2 Nanoparticles to Improve Flexural Properties of Glass Fibers/Epoxy Grid-Stiffened Composite Panels

Authors: Reza Eslami-Farsani, Hamed Khosravi, Saba Fayazzadeh

Abstract:

Lightweight and efficient structures aim to enhance the efficiency of components in various industries. To this end, composites are among the most widely used materials because of their durability, high strength and modulus, and low weight. One type of advanced composite is the grid-stiffened composite (GSC) structure, which has been extensively considered in the aerospace, automotive, and aircraft industries; GSC structures are among the top candidates for replacing some of the traditional components used in these industries. Although there are a good number of published surveys on the design aspects and fabrication of GSC structures, to our knowledge little systematic work has been reported on modifying their constituent materials to improve their properties. Matrix modification using nanoparticles is an effective method for enhancing the flexural properties of fibrous composites. In the present study, a silane coupling agent (3-glycidoxypropyltrimethoxysilane/3-GPTS) was introduced onto the silica (SiO2) nanoparticle surface and its effects on the three-point flexural response of isogrid E-glass/epoxy composites were assessed. Based on the Fourier Transform Infrared (FTIR) spectra, it was inferred that the 3-GPTS coupling agent was successfully grafted onto the surface of the SiO2 nanoparticles after modification. Flexural tests revealed improvements of 16%, 14%, and 36% in the stiffness, maximum load and energy absorption of the isogrid specimen filled with 3 wt.% 3-GPTS/SiO2 compared to the neat one. It is worth mentioning that in these structures, considerable energy absorption was observed after the primary failure associated with the load peak. In addition, 3-GPTS functionalization had a positive effect on the flexural behavior of the multiscale isogrid composites. In conclusion, this study suggests that the addition of modified silica nanoparticles is a promising way to improve the flexural properties of grid-stiffened fibrous composite structures.

Keywords: Isogrid-stiffened composite panels, silica nanoparticles, surface modification, flexural properties.

561 HPTLC Fingerprint Profiling of Protorhus longifolia Methanolic Leaf Extract and Qualitative Analysis of Common Biomarkers

Authors: P. S. Seboletswe, Z. Mkhize, L. M. Katata-Seru

Abstract:

Protorhus longifolia is a medicinal plant that has been used traditionally to treat various ailments such as hemiplegic paralysis, blood clotting related diseases, diarrhoea and heartburn. This study reports a High-Performance Thin Layer Chromatography (HPTLC) fingerprint profile of a Protorhus longifolia methanolic extract and a qualitative analysis of the common biomarkers gallic acid, rutin, and quercetin. HPTLC analysis was performed using a CAMAG HPTLC system equipped with a CAMAG automatic TLC sampler 4, a CAMAG Automatic Developing Chamber 2 (ADC2), a CAMAG visualizer 2, a CAMAG Thin Layer Chromatography (TLC) scanner and visionCATS CAMAG HPTLC software. A mobile phase comprising toluene, ethyl acetate and formic acid (21:15:3) was used for the qualitative analysis of gallic acid and revealed eight peaks, while the mobile phase containing ethyl acetate, water, glacial acetic acid and formic acid (100:26:11:11), used for the qualitative analysis of rutin and quercetin, revealed six peaks. HPTLC silica gel 60 F254 glass plates (10 × 10) were used as the stationary phase. Gallic acid was detected at Rf = 0.35, while rutin and quercetin were not evident in the extract. Further studies will be performed to quantify gallic acid in Protorhus longifolia leaves and to identify other biomarkers.

Keywords: Biomarkers, fingerprint profiling, gallic acid, HPTLC, Protorhus longifolia.

560 Facility Location Selection using Preference Programming

Authors: C. Ardil

Abstract:

This paper presents a preference programming based multiple criteria decision making analysis for selecting a facility location for a new organization or for the expansion of an existing facility, a decision of vital importance for decision support systems and the strategic planning process. The implementation of decision support systems is considered crucial to sustaining competitive advantage and profitability in a turbulent environment. As effective strategic management and decision making are necessary, multiple criteria decision making analysis supports decision makers in formulating and implementing the right strategy. The investment cost associated with acquiring the property and constructing the facility makes facility location selection a long-term strategic investment decision, which justifies careful selection of the best location, as it results in higher economic benefits through increased productivity and an optimal distribution network. Selecting the proper facility location from a given set of alternatives is a difficult task, as many potentially conflicting qualitative and quantitative criteria must be considered. This paper solves a facility location selection problem using preference programming, an effective multiple criteria decision making analysis tool for dealing with complex decision problems in operational research. The ranking results of preference programming are compared with those of the WSM, TOPSIS and VIKOR methods.
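
Of the comparison methods mentioned, the weighted sum model (WSM) is the simplest and is sketched below: criterion values are normalised, cost criteria are inverted, and each candidate site is scored as a weighted sum. The sites, criteria, weights and scores are hypothetical.

```python
# A minimal weighted sum model (WSM) sketch for facility location ranking.
# All candidate sites, criteria, weights and scores below are hypothetical.
import numpy as np

criteria = ["land cost", "labour availability", "proximity to market", "infrastructure"]
benefit = np.array([False, True, True, True])        # cost criterion vs benefit criterion
weights = np.array([0.30, 0.25, 0.25, 0.20])

# rows = candidate locations A, B, C; columns follow `criteria`
scores = np.array([[4.2, 7.0, 6.5, 8.0],
                   [3.1, 6.0, 8.0, 6.5],
                   [5.0, 8.5, 5.5, 7.0]])

norm = scores / scores.max(axis=0)                   # linear normalisation for benefit criteria
norm[:, ~benefit] = scores[:, ~benefit].min(axis=0) / scores[:, ~benefit]  # invert cost criteria
wsm = norm @ weights
for name, s in zip("ABC", wsm):
    print(f"site {name}: WSM score = {s:.3f}")
print("best site:", "ABC"[int(np.argmax(wsm))])
```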

Keywords: Facility Location Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, Preference Programming, Location Selection, WSM, TOPSIS, VIKOR

559 Enhanced Efficacy of Kinetic Power Transform for High-Speed Wind Field

Authors: Nan-Chyuan Tsai, Chao-Wen Chiang, Bai-Lu Wang

Abstract:

The three-time-scale plant model of a wind power generator, including a wind turbine, a flexible vertical shaft, a Variable Inertia Flywheel (VIF) module, an Active Magnetic Bearing (AMB) unit and the applied wind sequence, is constructed. To allow the wind power generator to keep operating when the spindle speed exceeds its rated value, the VIF is included so that the spindle speed can be appropriately slowed down once a stronger wind field is exerted. To prevent potential damage due to collision of the shaft with conventional bearings, the AMB unit is proposed to regulate the shaft position deviation. By a singular perturbation order-reduction technique, a lower-order plant model can be established for the synthesis of the feedback controller. Two major system parameter uncertainties, an additive uncertainty and a multiplicative uncertainty, are constituted by the wind turbine and the VIF, respectively. A Frequency Shaping Sliding Mode Control (FSSMC) loop is proposed to account for these uncertainties and suppress the unmodeled higher-order plant dynamics. Finally, the efficacy of the FSSMC is verified by intensive computer and experimental simulations of the regulation of the shaft position deviation and the counterbalancing of unpredictable wind disturbances.

Keywords: Sliding Mode Control, Singular Perturbation, Variable Inertia Flywheel.

558 Unattended Crowdsensing Method to Monitor the Quality Condition of Dirt Roads

Authors: Matías Micheletto, Rodrigo Santos, Sergio F. Ochoa

Abstract:

In developing countries, most roads in rural areas are dirt roads. They require frequent maintenance, since they are affected by erosive events, such as rain or wind, and by the transit of heavy trucks and machinery. Early detection of damage to the road condition is a key aspect, since it reduces maintenance time and cost as well as the restrictions on other vehicles travelling through. Most proposals that help address this problem require the explicit participation of drivers, a permanent internet connection, or substantial instrumentation in vehicles or on roads. These constraints limit the suitability of such proposals in developing regions such as Latin America. This paper proposes an alternative method, based on unattended crowdsensing, to determine the quality of dirt roads in rural areas. The method involves a mobile application that complements the road condition surveys carried out by the organizations in charge of road network maintenance, giving them early warnings about road areas that may require maintenance. Drivers can also take advantage of the early warnings while they travel on these roads. The method was evaluated using information from a public dataset. Although preliminary, the results indicate that the proposal is potentially suitable for providing awareness of dirt road condition to drivers, transportation authorities and road maintenance companies.

Keywords: Dirt roads automatic quality assessment, collaborative system, unattended crowdsensing method, roads quality awareness provision.

557 Removal of Volatile Organic Compounds from Contaminated Surfactant Solution using Co-Current Vacuum Stripping

Authors: Pornchai Suriya-Amrit, Suratsawadee Kungsanant, Boonyarach Kitiyanan

Abstract:

There has been growing interest in utilizing surfactants in remediation processes to separate hydrophobic volatile organic compounds (HVOCs) from aqueous solution. One attractive process is cloud point extraction (CPE), which utilizes nonionic surfactants as a separating agent. Since the surfactant cost is a key determinant of the economic viability of the process, it is important that the surfactants are recycled and reused. This work aims to study the performance of co-current vacuum stripping using a packed column for HVOC removal from contaminated surfactant solution. Six HVOCs were selected as contaminants. The studied surfactant is a branched secondary alcohol ethoxylate (AE), Tergitol TMN-6 (C14H30O2). The volatility and the solubility of the HVOCs in the surfactant system are determined in terms of an apparent Henry's law constant and a solubilization constant, respectively. Moreover, the HVOC removal efficiency of the vacuum stripping column is assessed in terms of the percentage of HVOC removal and the overall liquid-phase volumetric mass transfer coefficient. The apparent Henry's law constants of benzene, toluene, and ethylbenzene were 7.00×10-5, 5.38×10-5, and 3.35×10-5, respectively, and their solubilization constants were 1.71, 2.68, and 7.54, respectively. The HVOC removal for all solutes was around 90 percent.

Keywords: Apparent Henry’s law constant, Branched secondary alcohol ethoxylates, Vacuum Stripping.

556 Assessment of the Vulnerability and Risk of Climate Change on Water Supply and Demand in Taijiang Area

Authors: Yu-Chen Lin, Tzong-Yeang Lee, Hung-Chih Shih

Abstract:

The development of sustainable water resources utilization is crucial. The ecological environment and water resources systems form the foundation of the existence and development of the social economy, and the urban ecological support system depends on these resources as well. This research studies the vulnerability, criticality, and risk of climate change with respect to water supply and demand in the main administrative districts of the Taijiang Area (Tainan City). Based on the two scenarios set in this paper and various factors (indexes), the research adopts two kinds of weights (equal and AHP-derived) to conduct the calculation and establish the water supply and demand risk map for the target year 2039. According to the risk analysis based on equal weights, only one district falls into the high-risk grade (Grade 4). Based on the AHP weights, 16 districts fall into the high or highest grades (Grades 4 and 5), and among them, two districts fall into the highest grade (Grade 5). These results show that the risk level of water supply and demand in cities is higher than that in towns. The government generally gives more attention to adjustment strategies in the cities; however, it should also provide proper adjustment strategies for the towns so that they can cope with the risks of water supply and demand.
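
A minimal sketch of the risk-mapping calculation is given below: each district's score is a weighted combination of its indicator values, computed once with equal weights and once with AHP-derived weights, and then binned into grades 1-5. The indicator values, weights and grade thresholds are hypothetical, not the study's data.

```python
# A minimal sketch of weighted risk scoring and grading under equal vs AHP weights.
# Indicator values, the AHP weights and the grade thresholds are all hypothetical.
import numpy as np

indicators = np.array([          # rows: districts; cols: e.g. supply shortfall,
    [0.8, 0.6, 0.9, 0.4],        # population density, dependence on transfers, drought frequency
    [0.3, 0.2, 0.4, 0.5],
    [0.6, 0.9, 0.7, 0.8],
])
equal_w = np.full(4, 0.25)
ahp_w = np.array([0.45, 0.25, 0.20, 0.10])   # assumed pairwise-comparison result

def risk_grade(score):
    return np.digitize(score, [0.2, 0.4, 0.6, 0.8]) + 1   # map scores to grades 1..5

for name, w in [("equal", equal_w), ("AHP", ahp_w)]:
    print(name, "weights -> grades", risk_grade(indicators @ w))
```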

Keywords: Climate change, risk, vulnerability, water supply and demand.

555 Adjustment of a PET Scanner for PEPT

Authors: Alireza Sadrmomtaz

Abstract:

Positron emission particle tracking (PEPT) is a technique in which a single radioactive tracer particle can be accurately tracked as it moves. A limitation of PET is that, in order to reconstruct a tomographic image, it is necessary to acquire a large volume of data (millions of events), so it is difficult to study rapidly changing systems; PEPT, by contrast, is a very fast process. In PEPT, detecting both photons defines a line, and the annihilation is assumed to have occurred somewhere along this line. The location of the tracer can be determined to within a few mm from the coincident detection of a small number of pairs of back-to-back gamma rays, using triangulation. This can be achieved many times per second, and the track of a moving particle can be reliably followed. The technique was invented at the University of Birmingham [1]. The aim in PEPT is not to form an image of the tracer particle but simply to determine its location over time. If the tracer is followed for a long enough period within a closed, circulating system, it explores all possible types of motion. The application of PEPT to industrial process systems carried out at the University of Birmingham falls into two subject areas: the behaviour of granular materials and that of viscous fluids. Granular materials are processed in industry, for example in the manufacture of pharmaceuticals, ceramics, food and polymers, and PEPT has been used in a number of ways to study the behaviour of these systems [2]; it allows a single particle to be tracked within the bed [3]. PEPT has also been used to study systems such as fluid flow and viscous fluids in mixers [4], using a neutrally buoyant tracer particle [5].
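
The triangulation step can be sketched as a small least-squares problem: each coincidence event defines a line of response, and the tracer location is estimated as the point minimising the summed squared distance to a set of such lines. The sketch below uses synthetic event geometry and illustrates the principle only, not the Birmingham algorithm.

```python
# A minimal sketch of locating a tracer from lines of response: find the point
# minimising the total squared distance to lines x = p_i + t*d_i (synthetic data).
import numpy as np

def locate(points, directions):
    """Least-squares point closest to a set of lines (directions are unit vectors)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        P = np.eye(3) - np.outer(d, d)      # projector orthogonal to this line
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
true_pos = np.array([12.0, -3.0, 45.0])          # mm, synthetic tracer location
dirs = rng.normal(size=(50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = true_pos + 0.5*rng.normal(size=(50, 3))    # lines pass near the tracer (0.5 mm scatter)
print(locate(pts, dirs))                         # close to [12, -3, 45]
```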

Keywords: PET, BGO, Particle Tracking, ECAT 931, List mode, PEPT.

554 Ice Load Measurements on Known Structures Using Image Processing Methods

Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

This study employs a method based on image analysis and structure information to detect accumulated ice on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions, so image processing methods are used to measure ice loads automatically. Most image processing methods are developed based on the analysis of captured images. In this method, ice loads on structures are calculated by defining the structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars is designed as the known structure of the experimental setup, and asymmetric ice accumulated on the structure in a cold room represents the actual experimental case. The camera's intrinsic and extrinsic parameters are used to define the structure coordinates in the image coordinate system according to the camera location and angle. A thresholding method is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining the information from the binary image with the structure coordinates, and averaging the ice diameters from different camera views yields the ice thicknesses of the structure elements. Comparison between ice load measurements using this method and the actual ice loads shows positive correlations within an acceptable range of error. The method can be applied to complex structures by defining the structure and camera coordinates.
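
A minimal sketch of the measurement idea is given below: after thresholding, the apparent pixel width of a known cylindrical bar is converted to millimetres with the calibrated pixel scale, and the ice thickness follows from the difference with the bare diameter. The pixel scale, bar diameter and image row are synthetic, not values from the experiment.

```python
# A minimal sketch: threshold an image row, measure the apparent width of an iced
# bar in pixels, convert to mm, and subtract the known bare diameter (synthetic data).
import numpy as np

def apparent_width_px(gray_row, threshold=120):
    """Number of pixels along one image row that belong to the (bright) iced bar."""
    return int(np.count_nonzero(gray_row > threshold))

mm_per_px = 0.8                      # from camera calibration (assumed)
bare_diameter_mm = 25.0              # known structure geometry (assumed)

row = np.full(200, 40, dtype=np.uint8)
row[60:125] = 200                    # synthetic bright band: iced bar, 65 px wide

iced_diameter_mm = apparent_width_px(row) * mm_per_px
ice_thickness_mm = (iced_diameter_mm - bare_diameter_mm) / 2.0   # ice on both sides
print(f"iced diameter ~ {iced_diameter_mm:.1f} mm, ice thickness ~ {ice_thickness_mm:.1f} mm")
```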

Keywords: Camera calibration, Ice detection, ice load measurements, image processing.

553 The Role of Chemerin and Myostatin after Physical Activity

Authors: M. J. Pourvaghar, M. E. Bahram

Abstract:

Obesity and overweight are among the most common metabolic disorders in both industrialized and developing countries. Consequences of pathological obesity include cardiovascular disease and metabolic syndrome. Chemerin is an adipokine that plays a role in the regulation of adipocyte function and glucose metabolism in the liver and musculoskeletal system; most likely, chemerin is involved in obesity-related disorders such as type 2 diabetes and cardiovascular disease. Aerobic exercise reduces the level of chemerin and causes macrophage penetration into fat cells and inflammatory factors. Several efforts have been made to clarify the cellular and molecular mechanisms of muscle hypertrophy and atrophy. Myostatin, a new member of the TGF-β family, is a transforming growth factor whose expression negatively regulates the growth of skeletal muscle; increased levels of this hormone have been observed in conditions of muscular atrophy, while its levels decrease in response to muscle overload after the atrophy period. TGF-β is the most important cytokine in the development of skeletal muscle. Myostatin plays an important role in muscle control, and animal and human studies show a negative role of myostatin in the growth of skeletal muscle. Release of myostatin from the Golgi apparatus begins on the ninth day of the onset period and continues until birth throughout muscle growth. Higher levels of myostatin are found in obese people, and 10 weeks of resistance training can reduce plasma myostatin levels.

Keywords: Chemerin, myostatin, obesity, physical activity.
