Search results for: wind seed measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4426

1006 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery

Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko

Abstract:

In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analysed. Large eddy simulation (LES) with a dynamic subgrid-scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure and the properties of the blood and the elastic boundary were based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region was modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realised via a two-way coupling between the blood flow modelled via LES and the deforming vessel. The information on the flow pressure and the wall motion was exchanged continually during the cycle by an arbitrary Lagrangian-Eulerian method. The boundary condition of the current time step depended on previous solutions. The fluctuation of the velocity in the post-stenotic region was analysed in the study. The axial velocity at the normalised position Z=0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary was also examined; in particular, the wall displacements at systole and diastole were compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and during the deceleration phase.

Keywords: Large Eddy Simulation, Fluid Structural Interaction, constricted artery, Computational Fluid Dynamics

Procedia PDF Downloads 284
1005 Co-Administration Effects of Conjugated Linoleic Acid and L-Carnitine on Weight Gain and Biochemical Profile in Diet Induced Obese Rats

Authors: Maryam Nazari, Majid Karandish, Alihossein Saberi

Abstract:

Obesity, as a global health challenge, motivates pharmaceutical industries to produce anti-obesity drugs. However, the effectiveness of these agents remains unclear. Because of the popularity of dietary supplements, the aim of this study was to investigate the effects of Conjugated Linoleic Acid (CLA) and L-carnitine (LC) on serum glucose, triglyceride, cholesterol and weight changes in diet-induced obese rats. 48 male Wistar rats were randomly divided into two groups: normal fat diet (n=8) and high fat diet (HFD) (n=32). After eight weeks, the second group, which was maintained on HFD until the end of the study, was subdivided into four categories: a) 500 mg corn oil (as control group), b) 500 mg CLA, c) 200 mg LC, d) 500 mg CLA + 200 mg LC. All doses were calculated per kg body weight and administered by oral gavage for four weeks. Body weights were measured and recorded weekly by means of a digital scale. At the end of the study, blood samples were collected for measurement of biochemical markers. SPSS Version 16 was used for statistical analysis. At the end of the 8th week, a significant difference in weight was observed between the HFD and NFD groups. After 12 weeks, LC significantly reduced weight gain by 4.2%. The trend of weight gain in the CLA and CLA+LC groups was decelerated, but not significantly. CLA+LC reduced the triglyceride level significantly, but only CLA had a significant influence on total cholesterol and a non-significant decreasing effect on FBS. Our results showed that an obesogenic diet in a relatively short time led to obesity and dyslipidemia, which can be modified by LC and CLA to some extent.

Keywords: conjugated linoleic acid, high fat diet, L-Carnitine, obesity

Procedia PDF Downloads 143
1004 A Robust Spatial Feature Extraction Method for Facial Expression Recognition

Authors: H. G. C. P. Dinesh, G. Tharshini, M. P. B. Ekanayake, G. M. R. I. Godaliyadda

Abstract:

This paper presents a new spatial feature extraction method based on principal component analysis (PCA) and Fisher discriminant analysis (FDA) for facial expression recognition. It not only extracts reliable features for classification, but also reduces the feature space dimensions of pattern samples. In this method, each grayscale image is first considered in its entirety as the measurement matrix. Then, the principal components (PCs) of the row vectors of this matrix and the variance of these row vectors along the PCs are estimated. This ensures the preservation of the spatial information of the facial image. Afterwards, by incorporating the spectral information of the eigen-filters derived from the PCs, a feature vector is constructed for a given image. Finally, FDA is used to define a set of basis vectors in a reduced-dimension subspace such that optimal clustering is achieved. FDA defines an inter-class scatter matrix and an intra-class scatter matrix to enhance the compactness of each cluster while maximizing the distance between cluster marginal points. To match the test image with the training set, a cosine-similarity-based Bayesian classification was used. The proposed method was tested on the Cohn-Kanade database and the JAFFE database. It was observed that the proposed method, which incorporates spatial information to construct an optimal feature space, outperforms the standard PCA and FDA based methods.
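
A minimal illustrative sketch of the pipeline described above, written in Python (not the authors' code): per-image row-vector PCA features, a Fisher discriminant projection via scikit-learn's LinearDiscriminantAnalysis, and a plain cosine-similarity match standing in for the Bayesian weighting. The names train_images and train_labels are hypothetical placeholders for a labelled grayscale training set.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def spatial_pca_features(img, n_pcs=5):
        # treat the grayscale image itself as the measurement matrix; rows are observations
        rows = img.astype(float)
        rows = rows - rows.mean(axis=0)
        cov = rows.T @ rows / (rows.shape[0] - 1)
        vals, vecs = np.linalg.eigh(cov)                      # principal components of the row vectors
        pcs = vecs[:, np.argsort(vals)[::-1][:n_pcs]]
        return (rows @ pcs).var(axis=0)                       # variance of the rows along each PC

    F_train = np.array([spatial_pca_features(im) for im in train_images])
    y_train = np.array(train_labels)
    lda = LinearDiscriminantAnalysis().fit(F_train, y_train)  # Fisher discriminant subspace
    Z_train = lda.transform(F_train)

    def classify(img):
        z = lda.transform(spatial_pca_features(img)[None, :]).ravel()
        sims = Z_train @ z / (np.linalg.norm(Z_train, axis=1) * np.linalg.norm(z) + 1e-12)
        return y_train[np.argmax(sims)]                       # cosine-similarity match to the training set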

Keywords: facial expression recognition, principal component analysis (PCA), Fisher discriminant analysis (FDA), eigen-filter, cosine similarity, Bayesian classifier, f-measure

Procedia PDF Downloads 411
1003 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique

Authors: Satyasen Panda, Urmila Bhanja

Abstract:

In this paper, we present and analyse three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with a NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurement of the signal-to-noise ratio (SNR), BER and eye diagram for different numbers of simultaneous users. In the analysis, various types of noise and multiple access interference (MAI) effects were also considered. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with lower BER than the OR and AND subtraction techniques. The received optical power is also measured at various levels of BER to analyse the effect of attenuation.
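
For orientation, the sketch below evaluates the Gaussian-approximation relation between SNR and BER that is commonly used in SAC-OCDMA performance analyses; it is an assumed illustration, and the authors' exact noise model (PIIN, shot and thermal noise) may lead to a different expression.

    import numpy as np
    from scipy.special import erfc

    def ber_from_snr(snr_linear):
        # Gaussian approximation: BER = 0.5 * erfc( sqrt(SNR / 8) )
        return 0.5 * erfc(np.sqrt(snr_linear / 8.0))

    print(ber_from_snr(80.0))   # a linear SNR of 80 (about 19 dB) gives a BER of roughly 4e-6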

Keywords: Cross Correlation (CC), Three dimensional Optical Code Division Multiple Access (3-D OCDMA), Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA), Multiple Access Interference (MAI), Phase Induced Intensity Noise (PIIN), Three Dimensional Modified Quadratic Congruence/Modified Prime (3-D MQC/MP) code

Procedia PDF Downloads 401
1002 The Effects of Menstrual Phase on Upper and Lower Body Anaerobic Performance in College-Aged Women

Authors: Kelsey Scanlon

Abstract:

Introduction: With the number of female collegiate and professional athletes on the rise in recent decades, fluctuations in physical performance in relation to the menstrual cycle are an important area of study. Purpose: The purpose of this research was to compare differences in upper and lower body maximal anaerobic capacities across a single menstrual cycle. Methods: Participants (n=11) met a total of four times; once for familiarization and again on day 1 of menses (follicular phase), day 14 (ovulation), and day 21 (luteal phase), respectively. Upper body power was assessed using a bench press weight of ~50% of the participant's predetermined 1-repetition maximum (1-RM) on a ballistic measurement system, and the variables included peak force (N), mean force (N), peak power (W), mean power (W), and peak velocity (m/s). Lower body power output was collected using a standard Wingate test. The variables of interest were anaerobic capacity (W/kg), peak power (W), mean power (W), fatigue index (W/s), and total work (J). Results: Statistical significance was not observed (p > 0.05) in any of the aforementioned variables after completing multiple one-way analyses of variance (ANOVAs) with repeated measures on time. Conclusion: Within the parameters of this research, neither female upper nor lower body power output differed across the menstrual cycle when analyzed using the 50% of 1-RM maximal bench press and the 30-second maximal effort cycle ergometer Wingate test. Therefore, researchers should not alter their subject populations due to the incorrect assumption that power output may be influenced by the menstrual cycle.
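
As an illustration of the statistical test named above, the following Python sketch runs a one-way repeated-measures ANOVA with statsmodels; the data frame values are invented for demonstration and are not the study's measurements.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # long-format data: one row per subject per menstrual phase (values are hypothetical)
    df = pd.DataFrame({
        "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "phase": ["follicular", "ovulation", "luteal"] * 3,
        "peak_power": [650.0, 662.0, 655.0, 701.0, 694.0, 705.0, 610.0, 615.0, 608.0],
    })
    result = AnovaRM(df, depvar="peak_power", subject="subject", within=["phase"]).fit()
    print(result)   # F statistic and p value for the within-subject effect of phase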

Keywords: anaerobic, athlete, female, power

Procedia PDF Downloads 137
1001 Triose Phosphate Utilisation at the (Sub)Foliar Scale Is Modulated by Whole-plant Source-sink Ratios and Nitrogen Budgets in Rice

Authors: Zhenxiang Zhou

Abstract:

The triose phosphate utilisation (TPU) limitation to leaf photosynthesis is a biochemical process concerning the sub-foliar carbon sink-source (im)balance, in which photorespiration-associated amino acid exports provide an additional outlet for carbon and increase the leaf photosynthetic rate. However, whether this process is regulated by whole-plant sink-source relations and nitrogen budgets remains unclear. We address this question by model analyses of gas-exchange data measured on leaves at three growth stages of rice plants grown at two nitrogen levels, where three means (leaf-colour modification, adaxial vs abaxial measurements, and panicle pruning) were explored to alter source-sink ratios. Higher specific leaf nitrogen (SLN) resulted in higher rates of TPU and also led to the TPU limitation occurring at a lower intercellular CO2 concentration. Photorespiratory nitrogen assimilation was greater in higher-nitrogen leaves but became smaller in cases associated with yellower-leaf modification, abaxial measurement, or panicle pruning. The feedback inhibition of panicle pruning on rates of TPU was not always observed, because panicle pruning blocked nitrogen remobilisation from leaves to grains, and the increased SLN masked the feedback inhibition. The (sub)foliar TPU limitation can be modulated by whole-plant source-sink ratios and nitrogen budgets during rice grain filling, suggesting a close link between sub-foliar and whole-plant sink limitations.

Keywords: triose phosphate utilization, sink limitation, panicle pruning, oryza sativa

Procedia PDF Downloads 73
1000 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contain enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. It further describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides a road map for future development in FreqAI.
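
A compact sketch of the adaptive-retraining idea described above (rolling-window retraining with a simple parameter-space outlier filter), using scikit-learn; this illustrates the concept only and is not the FreqAI API, and the array names are assumptions.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def adaptive_predict(features, targets, window=1000, z_max=3.0):
        # features: (T, n_features) array, targets: (T,) array of future returns
        preds = []
        for t in range(window, len(features)):
            X, y = features[t - window:t], targets[t - window:t]
            # drop rows lying far outside the recent parameter space (simple z-score filter)
            z = np.abs((X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9))
            keep = (z < z_max).all(axis=1)
            model = GradientBoostingRegressor().fit(X[keep], y[keep])   # in practice, retrain periodically
            preds.append(model.predict(features[t:t + 1])[0])
        return np.array(preds)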

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 75
999 Undergraduates Learning Preferences: A Comparison of Science, Technology and Social Science Academic Disciplines in Relations to Teaching Designs and Strategies

Authors: Salina Budin, Shaira Ismail

Abstract:

Students learn effectively in a learning environment with a teaching approach that matches their learning preferences. The main objective of the study is to examine the learning preferences of students in the Science and Technology (S&T) and Social Science (SS) fields of study at Universiti Teknologi Mara (UiTM), Pulau Pinang. The measurement instrument is based on the Dunn and Dunn Learning Styles, which measure five elements of learning styles: environmental, sociological, emotional, physiological and psychological. Questionnaires were distributed amongst undergraduates in the Faculty of Mechanical Engineering and the Faculty of Business Management. The respondents comprise 131 diploma students of the Faculty of Mechanical Engineering and 111 degree students of the Faculty of Business Management. The results indicate that both S&T and SS students share similar learning preferences with respect to the environmental aspect, emotional preferences, motivational level, learning responsibility, persistence in learning and learning structure. Most of the S&T students were found to be analytical learners, and the majority of SS students were global learners. Both S&T and SS students were found to be visual learners who preferred active mobility in a relaxed and enjoyable setting, with some light refreshments during the learning process, and who exhibited reflective characteristics in learning. The S&T students were considered left-brain dominant, whereas the SS students were right-brain dominant. The findings highlighted that both categories of students exhibited similar learning preferences except for psychological preferences.

Keywords: learning preferences, Dunn and Dunn learning style, teaching approach, science and technology, social science

Procedia PDF Downloads 229
998 Influence of Specimen Geometry (10*10*40), (12*12*60) and (5*20*120) on the Determination of Concrete Toughness by Measurement of the Critical Stress Intensity Factor: A Comparative Study

Authors: M. Benzerara, B. Redjel, B. Kebaili

Abstract:

The cracking of concrete is an increasingly crucial problem with the development of complex structures linked to technological progress. Advances in the knowledge of the fracture process now make better prevention of the risk of fracture possible. The resistance to brittle fracture of a quasi-brittle material such as concrete, called toughness, is measured by the critical value of the stress intensity factor K1C at which the crack propagates; it is an intrinsic property of the material. Many studies reported in the literature on concrete were carried out on specimens that are in fact inadequate with respect to the intrinsic characteristic to be identified. Starting from this established fact, a programme was set up to compare the evolution of the toughness parameter K1C measured on ordinary concrete specimens of three different prismatic geometries, (10*10*40) cm3, (12*12*60) cm3 and (5*20*120) cm3, containing side notches of various depths simulating cracks. The notches were made using triangular pyramidal plates manufactured from coated sheet metal, placed at the centre of the specimens at the time of casting and then withdrawn to leave the trace of a crack. The tests were carried out in three-point bending in mode I fracture, using fracture mechanics techniques. The toughness parameter K1C measured with the three specimen geometries gives almost the same results. They are acceptable and fall within the range of results determined by various researchers (the toughness of ordinary concrete is around 1 MPa√m). These results also indicate an economy associated with the (5*20*120) cm3 specimen geometry, and therefore suggest the use of plate specimens in the future if one wants to master the toughness of this complex but always essential material that is concrete.
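
For reference, the widely used Srawley calibration for a single-edge-notched beam in three-point bending is sketched below; it is an assumed illustration, since the exact calibration function used by the authors is not stated in the abstract, and the numerical inputs are hypothetical.

    import math

    def k1c_senb(P, S, B, W, a):
        """P: fracture load [N], S: span [m], B: specimen width [m], W: depth [m], a: notch depth [m]."""
        alpha = a / W
        f = (3.0 * math.sqrt(alpha) * (1.99 - alpha * (1.0 - alpha) *
             (2.15 - 3.93 * alpha + 2.7 * alpha ** 2))) / (2.0 * (1.0 + 2.0 * alpha) * (1.0 - alpha) ** 1.5)
        return P * S / (B * W ** 1.5) * f          # Pa*sqrt(m); divide by 1e6 for MPa*sqrt(m)

    print(k1c_senb(P=5000, S=0.4, B=0.10, W=0.10, a=0.03) / 1e6)   # about 1 MPa*sqrt(m) with these inputs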

Keywords: concrete, fissure, specimen, toughness

Procedia PDF Downloads 288
997 Study of Age-Dependent Changes of Peripheral Blood Leukocytes Apoptotic Properties

Authors: Anahit Hakobjanyan, Zdenka Navratilova, Gabriela Strakova, Martin Petrek

Abstract:

Aging has a suppressive influence on human immune cells. Apoptosis may play an important role in age-dependent immunosuppression and lymphopenia. Prevention of apoptosis may be promoted in a BCL2-dependent or BCL2-independent manner. BCL2 is an antiapoptotic factor that plays an antioxidative role by localizing glutathione at the mitochondria and repressing oxidative stress. STAT3 may suppress apoptosis in a BCL2-independent manner and promote cell survival by blocking cytochrome-c release and reducing ROS production. The aim of our study was to estimate the influence of aging on BCL2-dependent and BCL2-independent prevention of apoptosis via measurement of BCL2 and STAT3 mRNA expression. The study was done on an Armenian population (2 groups: 37 healthy young (mean age±SE; min/max age; male/female: 37.6±1.1; 20/54; 15/22) and 28 healthy aged (66.7±1.5; 57/85; 12/16)). mRNA expression in peripheral blood leukocytes (PBL) was determined by RT-PCR using PSMB2 as the reference gene. Statistical analysis was done with GraphPad Prism 5; p < 0.05 was considered significant. The expression of BCL2 mRNA was lower in the aged group (0.199) compared with the young ones (0.643) (p < 0.01). Decreased expression was also recorded for the female and male subgroups (p < 0.01). The expression level of STAT3 mRNA increased with aging (young, 0.228; aged, 0.428) (p < 0.05), in the whole age group and in the male/female subgroups. The decreased level of BCL2 mRNA may indicate suppression of BCL2-dependent prevention of apoptosis during aging in peripheral blood leukocytes. At the same time, the increased level of STAT3 may suggest activation of BCL2-independent prevention of apoptosis during aging.

Keywords: BCL2, STAT3, aging, apoptosis

Procedia PDF Downloads 307
996 Observation of the Orthodontic Tooth's Long-Term Movement Using Stereovision System

Authors: Hao-Yuan Tseng, Chuan-Yang Chang, Ying-Hui Chen, Sheng-Che Chen, Chih-Han Chang

Abstract:

Orthodontic tooth treatment has demonstrated a high success rate in clinical studies. It is agreed that orthodontic tooth movement is based on the ability of the surrounding bone and periodontal ligament (PDL) to react to a mechanical stimulus with remodeling processes. However, the mechanism of tooth movement is still unclear. Recent studies focus on the simple compression-tension principle, while few studies directly measure tooth movement. Therefore, tracking tooth movement information during orthodontic treatment is very important in clinical practice. The aim of this study is to investigate the mechanical responses of tooth movement during orthodontic treatment. A stereovision system was applied to track the tooth movement of a patient fitted with stamp brackets. The system was established with two cameras whose relative positions were calibrated, and the orthodontic force was measured on a 3D-printed model with a six-axis load cell to determine the initial force application. The results show that the stereovision system presents a maximum measurement error of less than 2%. In the patient-tracking study, the incisor moved about 0.9 mm during 60 days of tracking, and half of the movement occurred in the first few hours. After the orthodontic force had been removed for 100 hours, the distance between the before and after positions of the incisor decreased by 0.5 mm, consistent with the force-release phenomenon. Using the stereovision system, the three-dimensional position of the teeth can be accurately located, and superposition of the 3D coordinate systems for all the data allows the complex tooth movement to be integrated.
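
The core geometric step of such a system, triangulating a bracket marker from two calibrated cameras, can be sketched with OpenCV as below; this is an illustrative example rather than the authors' software, and P1/P2 denote the 3x4 projection matrices obtained from calibration. Tracking over time then reduces to triangulating the same marker in successive image pairs and expressing the positions in a common reference frame.

    import cv2
    import numpy as np

    def triangulate_marker(P1, P2, pt1, pt2):
        # pt1, pt2: pixel coordinates (x, y) of the same bracket marker in the two views
        pts4d = cv2.triangulatePoints(P1, P2,
                                      np.asarray(pt1, dtype=float).reshape(2, 1),
                                      np.asarray(pt2, dtype=float).reshape(2, 1))
        return (pts4d[:3] / pts4d[3]).ravel()   # homogeneous -> Euclidean 3-D position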

Keywords: orthodontic treatment, tooth movement, stereovision system, long-term tracking

Procedia PDF Downloads 404
995 Development of Nondestructive Imaging Analysis Method Using Muonic X-Ray with a Double-Sided Silicon Strip Detector

Authors: I-Huan Chiu, Kazuhiko Ninomiya, Shin’ichiro Takeda, Meito Kajino, Miho Katsuragawa, Shunsaku Nagasawa, Atsushi Shinohara, Tadayuki Takahashi, Ryota Tomaru, Shin Watanabe, Goro Yabu

Abstract:

In recent years, a nondestructive elemental analysis method based on muonic X-ray measurements has been developed and applied for various samples. Muonic X-rays are emitted after the formation of a muonic atom, which occurs when a negatively charged muon is captured in a muon atomic orbit around the nucleus. Because muonic X-rays have higher energy than electronic X-rays due to the muon mass, they can be measured without being absorbed by a material. Thus, estimating the two-dimensional (2D) elemental distribution of a sample became possible using an X-ray imaging detector. In this work, we report a non-destructive imaging experiment using muonic X-rays at Japan Proton Accelerator Research Complex. The irradiated target consisted of polypropylene material, and a double-sided silicon strip detector, which was developed as an imaging detector for astronomical observation, was employed. A peak corresponding to muonic X-rays from the carbon atoms in the target was clearly observed in the energy spectrum at an energy of 14 keV, and 2D visualizations were successfully reconstructed to reveal the projection image from the target. This result demonstrates the potential of the non-destructive elemental imaging method that is based on muonic X-ray measurement. To obtain a higher position resolution for imaging a smaller target, a new detector system will be developed to improve the statistical analysis in further research.

Keywords: DSSD, muon, muonic X-ray, imaging, non-destructive analysis

Procedia PDF Downloads 192
994 The Effectiveness of Summative Assessment in Practice Learning

Authors: Abdool Qaiyum Mohabuth, Syed Munir Ahmad

Abstract:

Assessment enables students to focus on their learning. It engages them to work hard and motivates them to devote time to their studies. Student learning is directly influenced by the type of assessment involved in the programme. Summative assessment aims at providing a measurement of student understanding. In fact, it is argued that summative assessment is used for reporting and reviewing, besides providing an overall judgement of achievement. While summative assessment is a well-defined process for learning that takes place in the classroom environment, its application within the practice environment is still being researched. This paper discusses findings from a mixed-methods study exploring the effectiveness of summative assessment in practice learning. A survey questionnaire was designed to explore the perceptions of mentors and students about summative assessment in practice learning. The questionnaire was administered to University of Mauritius students and to the mentors who supervised students for their Work-Based Learning (WBL) practice at the respective placement settings. Some students, having undertaken their WBL practice, were interviewed to capture their views and experiences about the application of summative assessment in practice learning. Semi-structured interviews were also conducted with three experienced mentors who have assessed students on practice learning. The findings reveal that, though learning in the workplace is entirely different from learning at the University, most students had positive experiences of their summative assessments in practice learning. They felt comfortable and confident being assessed by their mentors in their placement settings and wished that the effort and time they devoted to their learning be recognised and valued. Mentors on their side confirmed that the summative assessment is valid and reliable, enabling them to better monitor and coach students to achieve the expected learning outcomes.

Keywords: practice learning, judgement, summative assessment, knowledge, skills, workplace

Procedia PDF Downloads 328
993 Preparation of Fe3Si/Ferrite Micro-and Nano-Powder Composite

Authors: Radovan Bures, Madgalena Streckova, Maria Faberova, Pavel Kurek

Abstract:

A composite material based on Fe3Si micro-particles and Mn-Zn nano-ferrite was prepared using powder metallurgy technology. A sol-gel process followed by autocombustion was used for the synthesis of the Mn0.8Zn0.2Fe2O4 ferrite. 3 wt.% of mechanically milled ferrite was mixed with the Fe3Si powder alloy. The mixed micro-nano powder system was homogenized by resonant acoustic mixing using a Resodyn LabRAM mixer. This non-invasive homogenization technique was used to preserve the spherical morphology of the Fe3Si powder particles. Uniaxial cold pressing in a closed die at a pressure of 600 MPa was applied to obtain a compact sample. Microwave sintering of the green compact was carried out at 800°C for 20 minutes in air. The density of the powders and the composite was measured by He pycnometry. The impulse excitation method was used to measure the elastic properties of the sintered composite. Mechanical properties were evaluated by measurement of the transverse rupture strength (TRS) and Vickers hardness (HV). Resistivity was measured by the four-point probe method. The distribution of the ferrite phase in the volume of the composite was documented by metallographic analysis. It was found that the nano-ferrite particles distributed among the micro-particles of the Fe3Si powder alloy led to a high relative density (~93%) and suitable mechanical properties (TRS >100 MPa, HV ~1 GPa, E-modulus ~140 GPa) of the composite. The high electrical resistivity (~6.7 ohm.cm) of the prepared composite indicates its potential application as a soft magnetic material at medium and high frequencies.
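
Two of the reported quantities follow simple textbook relations, sketched below with hypothetical numbers (not the paper's raw data): the transverse rupture strength of a rectangular bar in three-point bending and the relative density from pycnometry.

    def transverse_rupture_strength(F, L, b, h):
        # F: fracture load [N], L: support span [mm], b: width [mm], h: thickness [mm] -> MPa
        return 3.0 * F * L / (2.0 * b * h ** 2)

    def relative_density(measured, theoretical):
        return 100.0 * measured / theoretical      # percent

    print(transverse_rupture_strength(F=1300, L=25, b=12, h=6))   # ~113 MPa for these assumed values
    print(relative_density(6.9, 7.4))                             # ~93 % for assumed densities in g/cm3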

Keywords: micro- and nano-composite, soft magnetic materials, microwave sintering, mechanical and electric properties

Procedia PDF Downloads 350
992 Agile Smartphone Porting and App Integration of Signal Processing Algorithms Obtained through Rapid Development

Authors: Marvin Chibuzo Offiah, Susanne Rosenthal, Markus Borschbach

Abstract:

Certain research projects in Computer Science often involve research on existing signal processing algorithms and the development of improvements to them. Research budgets are usually limited, hence there is limited time for implementing the algorithms from scratch. It is therefore common practice to use implementations provided by other researchers as a template. These are most commonly provided in a rapid-development, i.e. fourth-generation, programming language, usually MATLAB. Rapid development is a common method in Computer Science research for quickly implementing and testing newly developed algorithms, and it is also a common task within agile project organization. The growing relevance of mobile devices in the computer market also gives rise to the need to demonstrate the successful executability and performance measurement of these algorithms on a mobile device operating system and processor, particularly on a smartphone. Open mobile systems such as Android are most suitable for this task, which is to be performed as efficiently as possible. Furthermore, efficiently implementing an interaction between the algorithm and a graphical user interface (GUI) that runs exclusively on the mobile device is necessary in cases where the project's goal statement also includes such a task. This paper examines different proposed solutions for porting computer algorithms obtained through rapid development into a GUI-based Android smartphone app and evaluates their feasibility. Accordingly, the feasible methods are tested and a short success report is given for each tested method.

Keywords: SMARTNAVI, Smartphone, App, Programming languages, Rapid Development, MATLAB, Octave, C/C++, Java, Android, NDK, SDK, Linux, Ubuntu, Emulation, GUI

Procedia PDF Downloads 468
991 Process Assessment Model for Process Capability Determination Based on ISO/IEC 20000-1:2011

Authors: Harvard Najoan, Sarwono Sutikno, Yusep Rosmansyah

Abstract:

Most enterprises now use information technology services as assets to support their business objectives. These services are provided either by an internal service provider (inside the enterprise) or by an external service provider (outside the enterprise). To deliver quality information technology services, the service provider (which from now on will be called the 'organization'), whether internal or external, must have a standard for its service management system. At present, the standard recognized as best practice for an organization's service management system is the international standard ISO/IEC 20000:2011. The most important part of this international standard is the first part, ISO/IEC 20000-1:2011 Service Management System Requirements, because it contains 22 organizational processes as requirements to be implemented in an organizational environment in order to build, manage and deliver quality service to the customer. Assessing the organization's management processes is the first step in implementing ISO/IEC 20000:2011 in the organization. This assessment needs a Process Assessment Model (PAM) as an assessment instrument. A PAM comprises two parts: a Process Reference Model (PRM) and a Measurement Framework (MF). The PRM is built by transforming the 22 processes of ISO/IEC 20000-1:2011, and the MF is based on ISO/IEC 33020. This assessment instrument was designed to assess the capability of the service management process in Divisi Teknologi dan Sistem Informasi (Information Systems and Technology Division), an internal organization of PT Pos Indonesia. The result of this assessment model can be proposed to improve the capability of the service management system.

Keywords: ISO/IEC 20000-1:2011, ISO/IEC 33020:2015, process assessment, process capability, service management system

Procedia PDF Downloads 449
990 An Adaptive Back-Propagation Network and Kalman Filter Based Multi-Sensor Fusion Method for Train Location System

Authors: Yu-ding Du, Qi-lian Bao, Nassim Bessaad, Lin Liu

Abstract:

The Global Navigation Satellite System (GNSS) is regarded as an effective approach for replacing the large number of track-side balises used in modern train localization systems. This paper describes a method based on the data fusion of a GNSS receiver sensor and an odometer sensor that can significantly improve the positioning accuracy. A digital track map is needed as another sensor to project the two-dimensional GNSS position onto a one-dimensional along-track distance, because the train's position is constrained to the track. A model trained by a back-propagation (BP) neural network is used to estimate the trend positioning error, which is related to the specific location and the approximate processing of the digital track map. Considering that in some conditions satellite signal failure will increase the GNSS positioning error, a GNSS signal detection step is applied. An adaptive weighted fusion algorithm is presented to reduce the standard deviation of the train speed measurement. Finally, an Extended Kalman Filter (EKF) is used for the fusion of the projected 1-D GNSS positioning data and the 1-D train speed data to obtain the estimated position. Experimental results suggest that the proposed method performs well and can reduce the positioning error notably.
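
The final fusion step can be illustrated with the minimal Kalman-filter sketch below over a 1-D constant-velocity state (along-track position and speed), updated with the projected GNSS position and the odometer speed; the paper's EKF additionally handles the nonlinear map projection, and all noise values here are illustrative assumptions.

    import numpy as np

    def fuse_position(gnss_pos, odo_speed, dt=1.0, q=0.5, r_pos=25.0, r_spd=0.04):
        F = np.array([[1.0, dt], [0.0, 1.0]])                 # constant-velocity state transition
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        H, R = np.eye(2), np.diag([r_pos, r_spd])             # we observe position and speed
        x = np.array([gnss_pos[0], odo_speed[0]])
        P = np.eye(2) * 100.0
        estimates = []
        for z in zip(gnss_pos, odo_speed):
            x, P = F @ x, F @ P @ F.T + Q                     # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
            x = x + K @ (np.array(z) - H @ x)                 # update with GNSS + odometer
            P = (np.eye(2) - K @ H) @ P
            estimates.append(x[0])
        return np.array(estimates)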

Keywords: multi-sensor data fusion, train positioning, GNSS, odometer, digital track map, map matching, BP neural network, adaptive weighted fusion, Kalman filter

Procedia PDF Downloads 233
989 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and is a popular topic in the forecasting area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy and reliable grid systems. Effective power forecasting of renewable energy load helps decision makers minimize the costs of electric utilities and power plants. Forecasting tools are required that can be used to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data, and LSTM networks are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, for example predicting wind and solar power. Historical load and weather information represent the most important input variables in power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out with these data via a deep neural network approach, including the LSTM technique, for the Turkish electricity market. 432 different models were created by varying the layer and cell counts and the dropout rate. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results among the 432 tested models are 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to the literature.
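
A minimal Keras sketch of one such LSTM configuration is shown below; the window length, layer size and dropout are assumed values for illustration and are not one of the paper's 432 tested configurations.

    from tensorflow import keras

    def build_lstm(window=24, n_features=1, units=64, dropout=0.2):
        model = keras.Sequential([
            keras.layers.Input(shape=(window, n_features)),
            keras.layers.LSTM(units, dropout=dropout),
            keras.layers.Dense(1),                                     # next-hour renewable load
        ])
        model.compile(optimizer="adam", loss="mse", metrics=["mae"])   # ADAM optimizer, MSE/MAE as above
        return model

    # X: (samples, 24, 1) sliding windows of hourly load, y: (samples,) next-hour values
    # model = build_lstm(); model.fit(X, y, epochs=50, validation_split=0.2)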

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 248
988 Study on the Mechanical Properties of Bamboo Fiber-Reinforced Polypropylene Based Composites: Effect of Gamma Radiation

Authors: Kamrun N. Keya, Nasrin A. Kona, Ruhul A. Khan

Abstract:

Bamboo fiber (BF) reinforced polypropylene (PP) based composites were fabricated by a conventional compression molding technique. In this investigation, bamboo composites were manufactured using different percentages of fiber, varying from 25 to 65% of the total weight of the composites. Both untreated and treated fibers were used to fabricate the BF/PP composites. A systematic study was done to observe the physical, mechanical, and interfacial behavior of the composites. In this study, mechanical properties of the composites such as tensile, impact, and bending properties were examined precisely. The maximum tensile strength (TS) and bending strength (BS) were found for the 50 wt% fiber composites, 65 MPa and 85.5 MPa respectively, whereas the highest tensile modulus (TM) and bending modulus (BM) were 5.73 GPa and 7.85 GPa, respectively. The BF/PP based composites were irradiated under gamma radiation (source strength 50 kCi, Cobalt-60) at various doses (i.e. 10, 20, 30, 40, 50 and 60 kGy). The effect of gamma radiation on the composites was also investigated, and it was found that a 30.0 kGy gamma dose (kGy = kilogray; the gray is the unit of absorbed radiation dose) produced better mechanical properties than the other doses. After flexural testing, the fracture surfaces of both the untreated and treated composites were studied by scanning electron microscopy (SEM). The SEM results of the treated BF/PP based composites showed better fiber-matrix adhesion and interfacial bonding than the untreated BF/PP based composites. Water uptake and soil degradation tests of untreated and treated composites were also carried out.

Keywords: bamboo fiber, polypropylene, compression molding technique, gamma radiation, mechanical properties, scanning electron microscope

Procedia PDF Downloads 122
987 Force Measurement for E-Cadherin-Mediated Intercellular Adhesion Probed by Protein Micropattern and Traction Force Microscopy

Authors: Chieh-Chung Tsou, Chun-Min Lo, Yeh-Shiu Chu

Abstract:

Cell’s mechanical forces provide important physical cues in regulation of proper cellular functions, such as cell differentiation, proliferation and migration. It is believed that adhesive forces generated by cell-cell interaction are able to transmit to the interior of cell through filamentous cortical cytoskeleton. Prominent among other membrane receptors, Cadherins are prototypical adhesive molecules able to generate remarkable forces to regulate intercellular adhesion. However, the mechanistic steps of mechano-transduction in Cadherin-mediated adhesion remain very controversial. We are interested in understanding how Cadherin protein complexes enable force generation and transmission at cell-cell contact in the initial stage of intercellular adhesion. For providing a better control of time, space, and substrate stiffness, in this study, a combination of protein micropattern, micropipette manipulation, and traction force microscopy is used. Pair micropattern with different forms confines cell spreading area and the gaps in pairs varied from 2 to 8 microns are applied for monitoring the forces that cell pairs generated, measured by traction force microscopy. Moreover, cell clones obtained from epithelial cells undergone genome editing are used to score the importance for known components of Cadherin complexes in force generation. We believe that our results from this combinatory mechanobiological method will provide deep insights on understanding the biophysical principle governing mechano- transduction of Cadherin-mediated intercellular adhesion.

Keywords: cadherin, intercellular adhesion, protein micropattern, traction force microscopy

Procedia PDF Downloads 239
986 Enhancing Cybersecurity Protective Behaviour: Role of Information Security Competencies and Procedural Information Security Countermeasure Awareness

Authors: Norshima Humaidi, Saif Hussein Abdallah Alghazo

Abstract:

Cybersecurity threats have become a serious issue recently, and one of the causes is human error, which is usually constituted by carelessness, ignorance, and failure to practice cybersecurity behaviour adequately. Using data from a quantitative survey, Partial Least Squares-Structural Equation Modelling (PLS-SEM) analysis was used to determine the factors that affect cybersecurity protective behaviour (CPB). This study adapts a cybersecurity protective behaviour model by focusing on two constructs that can enhance CPB: managers' information security competencies (MISI) and procedural information security countermeasure (PCM) awareness. The theory of leadership competencies was adapted to measure users' perceptions of the competencies of security managers/leaders in the organization. Confirmatory factor analysis (CFA) testing shows that all the measurement items of each construct were individually adequate in validity based on their factor loading values. Moreover, each construct is valid based on its parameter estimates and statistical significance. The quantitative research findings show that PCM awareness strongly influences CPB compared to MISI, while MISI was significantly related to PCM awareness. This study believes that the research findings can contribute to research on human behaviour in IS studies and are particularly beneficial to policy makers in improving organizations' strategic plans for information security, especially in this new era. Most organizations spend time and resources to provide and establish strategic plans for information security; however, if employees are not willing to comply with and practice information security behaviour appropriately, then these efforts are in vain.

Keywords: cybersecurity, protection behaviour, information security, information security competencies, countermeasure awareness

Procedia PDF Downloads 82
985 Development of a Practical Screening Measure for the Prediction of Low Birth Weight and Neonatal Mortality in Upper Egypt

Authors: Prof. Ammal Mokhtar Metwally, Samia M. Sami, Nihad A. Ibrahim, Fatma A. Shaaban, Iman I. Salama

Abstract:

Objectives: Reducing neonatal mortality by 2030 is still a challenging goal in developing countries. Low birth weight (LBW) is a significant contributor to this, especially where weighing newborns is not routinely possible. The present study aimed to determine simple, easy, reliable anthropometric measure(s) that can predict LBW and neonatal mortality. Methods: In a prospective cohort study, 570 babies born in districts of El Menia governorate, Egypt (where most deliveries occurred at home) were examined at birth. Newborn weight, length, head, chest, mid-arm, and thigh circumferences were measured. Follow-up of the examined neonates took place during their first four weeks of life to record any mortality. The most predictive anthropometric measures were determined using the SPSS statistical package, and multiple logistic regression analysis was performed. Results: Head and chest circumferences with cut-off points < 33 cm and ≤ 31.5 cm, respectively, were the significant predictors for LBW. They carried the best combination of the highest sensitivity (89.8% and 86.4%) and the lowest false negative predictive value (1.4% and 1.7%). Chest circumference with a cut-off point ≤ 31.5 cm was the significant predictor for neonatal mortality, with 83.3% sensitivity and a 0.43% false negative predictive value. Conclusion: Using chest circumference with a cut-off point ≤ 31.5 cm is recommended as a single, simple anthropometric measurement for the prediction of both LBW and neonatal mortality. The predictive measure could act as a substitute for weighing newborns in communities where scales are not routinely available.
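
For clarity, the screening statistics quoted above follow from a standard 2x2 table, as in the Python sketch below; the counts are hypothetical and are not the study's data.

    def screening_stats(tp, fn, fp, tn):
        sensitivity = tp / (tp + fn)              # proportion of true LBW babies flagged by the cut-off
        specificity = tn / (tn + fp)
        fn_predictive_value = fn / (fn + tn)      # proportion of 'negative' screens that are truly LBW
        return sensitivity, specificity, fn_predictive_value

    print(screening_stats(tp=53, fn=6, fp=120, tn=391))   # hypothetical counts: sensitivity ~0.90, FNPV ~0.015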

Keywords: low birth weight, neonatal mortality, anthropometric measures, practical screening

Procedia PDF Downloads 80
984 Air Quality Assessment for a Hot-Spot Station by Neural Network Modelling of the near-Traffic Emission-Immission Interaction

Authors: Tim Steinhaus, Christian Beidl

Abstract:

Urban air quality and climate protection are two major challenges for future mobility systems. Despite the steady reduction of pollutant emissions from vehicles over past decades, the local immission load within cities still partially reaches levels that are considered hazardous to human health. Although traffic-related emissions account for a major part of the overall urban pollution, modeling the exact interaction remains challenging. In this paper, a novel approach for determining the emission-immission interaction for traffic-induced NO2 immission load within a near-traffic hot-spot scenario, on the basis of neural network modeling, is presented. In a detailed sensitivity analysis, the significance of the relevant influencing variables on the prevailing NO2 concentration is first analyzed. Based on this, the generation process of the model is described, in which not only environmental influences but also the vehicle fleet composition, including its associated segment- and certification-specific real driving emission factors, are derived and used as input quantities. The validity of this approach, which has been presented in the past, is re-examined in this paper using updated data on vehicle emissions and recent immission measurement data. Within the framework of a final scenario analysis, the future development of the immission load is forecast for different developments of the vehicle fleet composition. It is shown that immission levels of less than half of today's yearly average limit values are technically feasible in hot-spot situations.

Keywords: air quality, emission, emission-immission-interaction, immission, NO2, zero impact

Procedia PDF Downloads 113
983 The Use of Sustainability Criteria on Infrastructure Design to Encourage Sustainable Engineering Solutions on Infrastructure Projects

Authors: Shian Saroop, Dhiren Allopi

Abstract:

In order to stay competitive and to meet upcoming stricter environmental regulations and customer requirements, designers have a key role in designing civil infrastructure so that it is environmentally sustainable. There is an urgent need for engineers to apply technologies and methods that deliver better and more sustainable performance of civil infrastructure, as well as a need to establish a standard of measurement for greener infrastructure, rather than merely using traditional solutions. However, there are no systems in place at the design stage that assess the environmental impact of design decisions on township infrastructure projects. This paper identifies alternative eco-efficient civil infrastructure design solutions and develops sustainability criteria and a toolkit to analyse the eco-efficiency of infrastructure projects. The proposed toolkit is aimed at promoting high-performance, eco-efficient, economical and environmentally friendly design decisions for stormwater, roads, water and sanitation in township infrastructure projects. These green solutions would bring a whole new class of eco-friendly solutions to current infrastructure problems, while at the same time adding a fresh perspective to the traditional infrastructure design process. A variety of projects were evaluated using the green infrastructure toolkit and their results were compared to each other, to assess the outcomes of using greener infrastructure versus the traditional method of designing infrastructure. The application of 'green technology' would ensure a sustainable design of township infrastructure services, helping the designer to consider alternative resources, the environmental impacts of design decisions, ecological sensitivity issues, innovation, maintenance and materials at the design stage of a project.

Keywords: eco-efficiency, green infrastructure, infrastructure design, sustainable development

Procedia PDF Downloads 210
982 Ideation, Plans, and Attempts for Suicide among Adolescents with Disability

Authors: Nyla Anjum, Humaira Bano

Abstract:

Disability, regardless of its type and nature, limits one or two significant life activities. These limitations constitute risk factors for suicide, and the rate and intensity of the problem surge during the critical age of adolescence. Research in the field of mental health overlooks the problem of suicide among persons with disability. The aim of the study was to investigate the prevalence of, and risk factors for, suicide among adolescents with disability. The study used a purposive sample of 106 participants of both genders across four major categories of disability: hearing impairment, physical impairment, visual impairment and intellectual disabilities. A face-to-face interview technique was adopted for data collection. Other variables were socio-economic status, social and family support, provision of services for persons with disability, and education and employment opportunities. For data analysis, an independent-samples t-test was applied to identify significant gender differences, and a one-way analysis of variance was run to identify differences among the four types of disability. Major predictors of suicide were identified with multiple regression analysis. It is concluded that suicidal ideation, plans and attempts among adolescents with disability are a multifaceted and imperative concern in the area of mental health. Urgent research recommendations include valid measurement of the suicide rate and identification of further risk factors for suicide among persons with disability. The study will also guide the prevention of this pressing problem and bring the message of a happy and healthy life not only to persons with disability but also to their families. It will also help to reduce the suicide rate in society.

Keywords: suicide, risk factors, adolescent, disability, mental health

Procedia PDF Downloads 369
981 A Comparative Study of Substituted Li Ferrites Sintered by the Conventional and Microwave Sintering Technique

Authors: Ibetombi Soibam

Abstract:

Li-Zn-Ni ferrite with the compositional formula Li0.4-0.5xZn0.2NixFe2.4-0.5xO4, where 0.02 ≤ x ≤ 0.1 in steps of 0.02, was fabricated by the citrate precursor method. In this method, metal nitrates and citric acid were used to prepare a gel which exhibits self-propagating combustion behavior, giving the required ferrite sample. The ferrite sample was pre-fired at 650°C in a programmable conventional furnace for 3 hours with a heating rate of 5°C/min. One series of samples was then given conventional sintering (CS) at 1040°C after the pre-firing process. Another series was given microwave sintering (MS) at 1040°C in a programmable microwave furnace which uses a single magnetron operating at a frequency of 2.45 GHz. X-ray diffraction patterns confirmed the spinel phase structure for both series. The theoretical and experimental densities were calculated. It was observed that densification increases with increasing Ni concentration in both series; however, the samples sintered by the microwave technique were found to be denser. The microstructure of the two series of samples was examined using scanning electron microscopy (SEM). Dielectric properties have been investigated as a function of frequency and composition for both series of samples sintered by the CS and MS techniques. The variation of the dielectric constant with frequency shows dispersion for both series, which was explained in terms of Koops' two-layer model. From the analysis of the dielectric measurements, it was observed that the value of the room temperature dielectric constant decreases with increasing Ni concentration for both series. The microwave sintered samples show a lower dielectric constant, making microwave sintering suitable for high-frequency applications. The possible mechanisms contributing to all the above behavior are discussed.

Keywords: citrate precursor, dielectric constant, ferrites, microwave sintering

Procedia PDF Downloads 389
980 Thermal Method for Testing Small Chemisorbent Samples on the Base of Potassium Superoxide

Authors: Pavel V. Balabanov, Daria A. Liubimova, Aleksandr P. Savenkov

Abstract:

The increase in technogenic and natural accidents accompanied by air pollution, for example by combustion products, leads to the necessity of respiratory protection. This work is devoted to the development of a calorimetric method and a device which allow the kinetics of carbon dioxide sorption by chemo-sorbents based on potassium superoxide to be investigated quickly, in order to assess the protective properties of closed-circuit respiratory protective apparatus. The features of the traditional approach for determining the sorption properties in a thin layer of chemo-sorbent are described, as well as the methods and devices which can be used for studying sorption kinetics. The authors developed an approach (as opposed to the traditional approach) based on measuring the power of internal heat sources in the chemo-sorbent layer; the heat sources arise as a result of the exothermic reaction of carbon dioxide sorption. This approach eliminates the necessity of chemical analysis of samples and can significantly reduce the time and material expenses of chemo-sorbent testing. The error of determining the volume fraction of adsorbed carbon dioxide by the developed method does not exceed 12%. Taking into account the efficiency of the method, we consider it a good alternative to traditional methods of chemical analysis for assessing the quality of protection sorbents.

Keywords: carbon dioxide chemisorption, exothermic reaction, internal heat sources, respiratory protective apparatus

Procedia PDF Downloads 396
979 Air Dispersion Model for Prediction Fugitive Landfill Gaseous Emission Impact in Ambient Atmosphere

Authors: Moustafa Osman Mohammed

Abstract:

This paper explores the formation of HCl aerosol in the atmospheric boundary layer and encourages the uptake of environmental modeling systems (EMSs) as a practical evaluation of gaseous emissions ('framework measures') from small and medium-sized enterprises (SMEs). The conceptual model predicts greenhouse gas emissions to ecological points beyond landfill site operations. It focuses on incorporating traditional knowledge into baseline information for both the measurement data and the mathematical results, regarding the parameters that influence the model's variable inputs. The parameters of the aerosol processes have been simplified on the basis of more complex aerosol process computations, and the simple model can be implemented in both Gaussian and Eulerian rural dispersion models. The aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds is taken into account through a photochemical formulation, with exposure effects according to HCl concentrations used as the starting point of the risk assessment. The discussion sets out distinct aspects of sustainability, reflecting inputs, outputs, and modes of impact on the environment. Thereby, the models incorporate abiotic and biotic species to broaden the scope of integration for both impact quantification and risk assessment. The latter environmental obligations ultimately suggest either a recommendation or a decision on what should be achieved legislatively in terms of mitigation measures for landfill gas (LFG).
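
As background, the Gaussian plume relation that rural dispersion models of the kind mentioned above build on is sketched below; the emission rate, wind speed, plume spreads and release height are illustrative inputs, not values from the paper.

    import math

    def gaussian_plume(Q, u, sigma_y, sigma_z, y, z, H):
        # Q: emission rate [g/s], u: wind speed [m/s], H: effective release height [m]
        lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
        vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                    math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))     # ground-reflection term
        return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical   # concentration [g/m^3]

    print(gaussian_plume(Q=5.0, u=3.0, sigma_y=60.0, sigma_z=30.0, y=0.0, z=1.5, H=10.0))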

Keywords: air pollution, landfill emission, environmental management, monitoring/methods and impact assessment

Procedia PDF Downloads 303
978 Effect of Clerodendrum Species on Oxidative Stress with Possible Implication in Alleviating Carcinogenesis

Authors: Somit Dutta, Pallab Kar, Arnab Kumar Chakraborty, Arnab Sen, Tapas Kumar Chaudhuri

Abstract:

In the present study, three species of Clerodendrum (Clerodendrum indicum, Volkameria inermis and Clerodendrum colebrookianum) were used to investigate possible activity against oxidative stress. A detailed in-vivo and in-vitro antioxidant profiling, directly associated with inflammation-related carcinogenesis, was carried out with the aim of evaluating the free radical scavenging activity of the Clerodendrum extracts. Cell viability and ROS generation in HEK-293 (human embryonic kidney cell line) cells were also measured. The immune cell proliferative properties (MTT assay) and in-vitro assays for the evaluation of their antioxidant activities, including scavenging of hydroxyl radical, nitric oxide, singlet oxygen, peroxynitrite and hydrogen peroxide, were investigated. GC-MS and FTIR analyses were performed to identify the active biological compounds. These active compounds were further studied to assess their potential medicinal properties, aided by molecular docking and interaction analysis between the active compounds and different proteins related to the oxidative stress that leads to the progression of carcinogenesis. The article demonstrates the role of ROS in various phases of carcinogenesis. The antioxidant and free radical scavenging capacity of all the Clerodendrum species might therefore prove beneficial for the immune system, and it may be concluded that these plant species offer great promise for cancer prevention and therapy due to the presence of several bioactive compounds and the potent antioxidant capacity of C. colebrookianum.

Keywords: antioxidant, cancer, oxidative stress, reactive oxygen species (ROS)

Procedia PDF Downloads 261
977 Vibration Analysis and Optimization Design of Ultrasonic Horn

Authors: Kuen Ming Shu, Ren Kai Ho

Abstract:

The ultrasonic horn has the functions of amplifying the amplitude and reducing the resonant impedance in an ultrasonic system. Its primary function is to amplify deformation or velocity during vibration and to focus the ultrasonic energy on a small area; it is a crucial component in the design of an ultrasonic vibration system. There are five common design methods for ultrasonic horns: the analytical method, the equivalent circuit method, equal mechanical impedance, the transfer matrix method, and the finite element method. In addition, the general optimization design process is to change the geometric parameters to improve a single performance measure, so the relation between parameters and objectives cannot be found. However, a good optimization design must be able to establish the relationship between input parameters and output parameters so that the designer can choose between parameters according to different performance objectives and obtain the results of the optimization design. In this study, an ultrasonic horn provided by Maxwide Ultrasonic Co., Ltd. was used as the reference for the optimized ultrasonic horn. The ANSYS finite element analysis (FEA) software was used to simulate the distribution of the horn amplitudes and the natural frequency value. The results showed that the simulated and measured frequencies were similar, verifying the accuracy of the simulation. ANSYS DesignXplorer was used to perform response surface optimization, which shows the relation between parameters and objectives. Therefore, this method can be used in place of the traditional experience-based or trial-and-error design methods to reduce material costs and design cycles.

Keywords: horn, natural frequency, response surface optimization, ultrasonic vibration

Procedia PDF Downloads 101