Search results for: data mining technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30176

26726 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network; they generate an enormous volume of real-time, high-speed data that helps organizations and companies make intelligent decisions. Integrating this data from multiple sources and delivering it to the appropriate client is fundamental to IoT development, and handling such a huge number of devices together with such a huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to communicate energy-efficiently, they sleep and wake periodically or aperiodically depending on traffic load, and sometimes they disconnect entirely when their batteries are depleted. When a node is unavailable, the network delivers incomplete, missing, or inaccurate data. Moreover, many IoT applications, such as vehicle tracking and patient tracking, require the devices to be mobile; if a device moves farther from the sink node than its range allows, the connection is lost, and other devices join the network to replace those that have broken down or left. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the network and hence produces poor-quality data; because of this dynamism, the actual cause of abnormal data is often unknown. If the data are of poor quality, decisions based on them are likely to be unsound, so it is highly important to process the data and estimate their quality before using them in IoT applications. In the past, many researchers have tried to estimate data quality and have proposed Machine Learning (ML), stochastic, and statistical methods that analyze stored data in the data-processing layer, without addressing the challenges that arise from the dynamic nature of IoT devices and their impact on data quality.
This research presents a comprehensive review of how the dynamic nature of IoT devices affects data quality and proposes a data quality model that can address this challenge and produce good-quality data for sensors monitoring water quality. The model is built using DBSCAN clustering together with weather-sensor data. An extensive study was carried out on the relationship between the data of weather sensors and the data of sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis is presented of the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is constructed that covers five dimensions of data quality: outlier detection and removal, completeness, patterns of missing values, accuracy (checked via cluster position), and consistency. Finally, a statistical analysis is performed on the clusters produced by DBSCAN, and consistency is evaluated through the Coefficient of Variation (CoV).
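A minimal sketch of the outlier-detection and consistency steps the abstract describes, using simulated water-quality readings; the sensor channels, the injected anomalies, and the DBSCAN parameters (eps, min_samples) are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch: DBSCAN-based outlier flagging plus a CoV
# consistency check on a simulated water-quality sensor stream.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Simulated (pH, turbidity) readings with a few injected anomalies
readings = rng.normal(loc=[7.2, 4.0], scale=[0.1, 0.3], size=(200, 2))
readings[::50] += [1.5, 3.0]  # abnormal spikes every 50th sample

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(readings)
outliers = readings[labels == -1]   # DBSCAN marks noise points with -1
clean = readings[labels != -1]

# Consistency via Coefficient of Variation (std / mean) per channel
cov = clean.std(axis=0) / clean.mean(axis=0)
print(f"flagged {len(outliers)} outliers; CoV per channel: {cov.round(3)}")
```

A low CoV on the cleaned stream indicates consistent readings; the spiked samples fall outside every dense cluster and are flagged as noise.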

Keywords: clustering, data quality, DBSCAN, and Internet of things (IoT)

Procedia PDF Downloads 143
26725 Effect of Burdock Root Extract Concentration on Physicochemical Property of Coated Jasmine Rice by Using Top-Spray Fluidized Bed Coating Technique

Authors: Donludee Jaisut, Norihisa Kato, Thanutchaporn Kumrungsee, Kiyoshi Kawai, Somkiat Prachayawarakorn, Patchalee Tungtrakul

Abstract:

Jasmine rice is a staple food of Thai people. However, its glycemic index is high, raising the risk of type II diabetes with regular consumption. Burdock root is a good source of non-starch polysaccharides such as inulin, which acts as a prebiotic and helps reduce blood-sugar levels. The purpose of this research was to reduce the digestion rate of jasmine rice by coating burdock root extract onto the rice surface using a top-spray fluidized bed coating technique. Coating experiments were performed by spraying burdock root solution onto jasmine rice kernels (Khao Dawk Mali 105; KDML), which had an initial moisture content of 11.6% wet basis, suspended in the fluidized bed. The experimental conditions were: a solution spray rate of 31.7 mL/min, atomization pressure of 1.5 bar, spray time of 10 min, drying time after spraying of 30 s, superficial air velocity of 3.2 m/s, and drying temperature of 60°C. Coated rice quality was evaluated in terms of moisture content, texture, whiteness, and digestion rate. The results showed that the initial and final moisture contents of the samples were the same at extract concentrations of 8% (v/v) and 10% (v/v). The texture did not change significantly from that of the uncoated sample, whiteness varied with the concentration of burdock root extract, and coated samples were digested more slowly.

Keywords: burdock root, digestion, drying, rice

Procedia PDF Downloads 295
26724 Roasting Process of Sesame Seeds Modelling Using Gene Expression Programming: A Comparative Analysis with Response Surface Methodology

Authors: Alime Cengiz, Talip Kahyaoglu

Abstract:

The roasting process is of major importance in obtaining the desired aromatic taste of nuts and seeds. In this study, two kinds of roasting were applied to hulled sesame seeds: vacuum oven and hot air roasting. The efficiency of Gene Expression Programming (GEP), a soft computing technique from evolutionary computation that describes cause-and-effect relationships in data modelling, and of Response Surface Methodology (RSM) was examined in modelling the roasting processes over a range of temperatures (120-180°C) and times (30-60 min). Color attributes (L*, a*, b*, Browning Index (BI)), textural properties (hardness and fracturability), and moisture content were evaluated and modelled by RSM and GEP. The GEP-based formulations and the RSM approach were compared with experimental results and evaluated according to their correlation coefficients. The results showed that both GEP and RSM were able to learn the relation between roasting conditions and the physical and textural parameters of roasted seeds adequately. However, GEP had better prediction performance than RSM, with high correlation coefficients (R² > 0.92) for all quality parameters. This result indicates that soft computing techniques have a better capability for describing the physical changes occurring in sesame seeds during roasting.
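To illustrate the kind of model comparison the abstract reports (not the paper's actual models or data), the sketch below fits a quadratic response-surface-style regression and a more flexible learner, standing in for the evolved GEP model, to simulated roasting data and compares their in-sample R²; the response function and noise level are invented:

```python
# Illustrative comparison of a quadratic RSM-style fit vs. a flexible
# learner on simulated (temperature, time) -> browning-index data.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
temp = rng.uniform(120, 180, 300)   # roasting temperature, °C
time = rng.uniform(30, 60, 300)     # roasting time, min
X = np.column_stack([temp, time])
# Hypothetical nonlinear browning response with measurement noise
y = 0.02 * temp * np.log(time) + rng.normal(0, 0.5, 300)

rsm = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, y)
flex = GradientBoostingRegressor(random_state=0).fit(X, y)

print("RSM-style R^2:", round(r2_score(y, rsm.predict(X)), 3))
print("Flexible  R^2:", round(r2_score(y, flex.predict(X)), 3))
```

In practice the comparison would use held-out data and the actual GEP-evolved expressions; this sketch only shows how two model families are scored against the same response.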

Keywords: gene expression programming, response surface methodology, roasting, sesame seed

Procedia PDF Downloads 419
26723 New Security Approach of Confidential Resources in Hybrid Clouds

Authors: Haythem Yahyaoui, Samir Moalla, Mounir Bouden, Skander Ghorbel

Abstract:

Nowadays, Cloud environments are becoming a necessity for companies. This technology provides access to data anywhere and anytime, optimized and secured access to resources, and additional security for the data stored on the platform. However, some companies do not trust Cloud providers: in their view, a provider can access and modify confidential data such as bank accounts. Much work has been done in this context, concluding that encryption performed by the provider ensures confidentiality, yet overlooking the fact that the Cloud provider can also decrypt those confidential resources. The better solution is to apply transformations to the data on the client side, before sending them to the Cloud, so that they become unreadable to the provider. This work aims at enhancing providers' quality of service and improving customers' trust.
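The principle the abstract argues for, transforming data client-side so the provider stores only unreadable bytes, can be sketched with a toy one-time pad; a real deployment would use a vetted authenticated cipher (e.g. AES-GCM) rather than this illustration:

```python
# Toy sketch: make data unreadable before upload, so the cloud
# provider stores only ciphertext and the key never leaves the client.
# One-time pad used purely for illustration of the client-side idea.
import secrets

def encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """XOR the plaintext with a fresh random key of equal length."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key  # the key stays with the client

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

record = b"account: 123-456, balance: 9000"
stored_in_cloud, local_key = encrypt(record)
assert stored_in_cloud != record                       # provider sees noise
assert decrypt(stored_in_cloud, local_key) == record   # client recovers data
```

The design point matches the abstract: because the key is never uploaded, even a provider able to read or decrypt its own stored copies learns nothing about the confidential resource.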

Keywords: cloud, confidentiality, cryptography, security issues, trust issues

Procedia PDF Downloads 380
26722 Recognizing and Prioritizing Effective Factors on Productivity of Human Resources Through Using Technique for Order of Preference by Similarity to Ideal Solution Method

Authors: Amirmehdi Dokhanchi, Babak Ziyae

Abstract:

The main aim of the present study is to identify and prioritize the factors affecting human resource productivity using the TOPSIS method. To this end, the literature on productivity was reviewed and the effective factors were identified. Managers, supervisors, and staff of the Tabriz Tractor Manufacturing Company constituted the population of this study, from which 160 individuals were selected by random sampling. Two questionnaires were used for data collection. The factors with the greatest effect on productivity were identified with the help of statistical software, and the TOPSIS method was then used to prioritize them: the second questionnaire was distributed to the sample to assess the effect of each factor against the predetermined indicators, yielding the decision matrix. The prioritization shows that an accurate organizational strategy, a high level of occupational skill, a partnership and contribution system, on-the-job training services, a high quality of occupational life, the dissemination of an appropriate organizational culture, encouragement of creativity and innovation, and environmental factors are ranked highest, in that order.
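The TOPSIS computation itself is straightforward; the sketch below implements the standard steps (vector normalization, weighting, distances to the ideal and anti-ideal points, relative closeness) on a hypothetical decision matrix whose factor names, scores, and weights are invented for illustration, not taken from the study:

```python
# Minimal TOPSIS sketch: rows are candidate factors, columns are
# benefit-type criteria (all "higher is better" for simplicity).
import numpy as np

def topsis(matrix, weights):
    m = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalize columns
    v = m * weights                               # weighted normalized matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)    # ideal / anti-ideal points
    d_pos = np.linalg.norm(v - ideal, axis=1)     # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                # relative closeness in [0, 1]

factors = ["strategy", "skill", "participation", "training"]
scores = np.array([[8, 9, 7],
                   [9, 7, 8],
                   [6, 8, 6],
                   [7, 6, 9]], dtype=float)
weights = np.array([0.5, 0.3, 0.2])               # must sum to 1

closeness = topsis(scores, weights)
ranking = [factors[i] for i in np.argsort(closeness)[::-1]]
print(ranking)
```

Cost-type criteria would swap the roles of max and min when forming the ideal point; the study's actual matrix comes from the second questionnaire.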

Keywords: productivity of human resources, productivity indicators, TOPSIS, prioritizing factors

Procedia PDF Downloads 335
26721 Estimation of Chronic Kidney Disease Using Artificial Neural Network

Authors: Ilker Ali Ozkan

Abstract:

In this study, an artificial neural network model was developed to estimate chronic kidney failure, a common disease. The patients' ages, their blood and biochemical values, and the presence of various chronic diseases form the 24 input features used for estimation. The input data were preprocessed because they contained both missing and nominal values. The 147 patient records obtained after preprocessing were divided into 70% training and 30% testing data. The artificial neural network with 25 neurons in the hidden layer was found to be the model with the lowest error. Using this model, chronic kidney failure could be estimated with an accuracy of 99.3%. The developed artificial neural network thus proved successful for estimating chronic kidney failure from clinical data.
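The described setup (24 inputs, 147 records, a 70/30 split, one hidden layer of 25 neurons) can be sketched as follows; since the clinical dataset is not public, synthetic stand-in data is used, so the printed accuracy is not comparable to the reported 99.3%:

```python
# Sketch of the abstract's network architecture on synthetic stand-in
# data: 24 features, 147 samples, 70/30 split, 25 hidden neurons.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=147, n_features=24, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test accuracy:", round(model.score(X_te, y_te), 3))
```

In the study, the hidden-layer width was chosen by comparing error values across candidate sizes; the same loop could wrap the `hidden_layer_sizes` parameter here.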

Keywords: estimation, artificial neural network, chronic kidney failure disease, disease diagnosis

Procedia PDF Downloads 449
26720 Study on the Changes in Material Strength According to Changes in Forming Methods in Hot-Stamping Process

Authors: Yong-Jun Jeon, Hyung-Pil Park, Min-Jae Song, Baeg-Soon Cha

Abstract:

Following the recent trend of producing lighter car bodies to improve automobile safety and fuel economy, hot stamping is a forming method that satisfies both demands. Hot stamping offers excellent formability, good dimensional precision, and other advantages: steel blanks are heated to temperatures of approximately 900°C or above, formed in a die at room temperature, and rapidly cooled. Because forming and rapid cooling occur simultaneously while the material is at high temperature, a quenching effect is achieved that improves material strength. However, there is insufficient information on how material strength changes with material temperature in relation to the heating method and the forming process in hot stamping. Accordingly, this study designs a press die for a T-shaped scale model of the center pillar in order to understand how material strength changes with the forming method in the hot-stamping process. To isolate the quenching effect within the process, material strength and forming precision were studied while the forming method was varied. In the experiments, boron steel (steel with boron additives) was heated to 950°C, transferred to the die, cooled to a material temperature of 400°C, and then air-cooled.
During the forming and cooling process, the experiment was conducted with two holding rates and three flange heating rates as forming parameters, and the change in material strength with the forming method was observed by verifying forming strength and forming precision for each condition.

Keywords: hot-stamping, formability, quenching, forming, press die, forming methods

Procedia PDF Downloads 464
26719 Patient Satisfaction Measurement Using Face-Q for Non-Incisional Double-Eyelid Blepharoplasty with Modified Single-Knot Continuous Buried Suture Technique

Authors: Kwei Huan Liw, Sashi B. Darshan

Abstract:

Background: Double-eyelid surgery has become one of the most sought-after aesthetic procedures among Asians. Many surgeons perform surgical blepharoplasty as well as various methods of non-incisional blepharoplasty. Face-Q is a validated instrument for measuring patient satisfaction with facial aesthetic procedures; here we analyzed the overall eye satisfaction score, the upper eyelid appraisal score, and the adverse effect on eyes score. Methods: 274 patients (548 eyes), aged 18 to 40 years, were recruited from 2015 to 2018. Each patient underwent non-incisional double-eyelid blepharoplasty using a single-knotted continuous buried suture. Three to five stab incisions were made depending on the upper eyelid size. A needle loaded with 7-0 nylon is passed from the lateral-most wound through the dermis and the conjunctiva in an alternating fashion into the remaining stab wounds. The suture is then tunneled back laterally in the deeper dermis and knotted securely with the suture end, and the knot is buried within the orbicularis oculi muscle. Each patient completed the Face-Q questionnaire before the procedure and two weeks afterwards. Results are reported as the percentage of the maximum achievable score. Patients were reviewed after 12 to 18 months to assess the long-term outcome. Results: The overall eye satisfaction score demonstrated a high level of post-operative satisfaction (97.85%, versus 27.32% pre-operatively). The appraisal of upper eyelid score showed a dramatic improvement in perception post-operatively (95.31%, versus 21.44% pre-operatively), and the adverse effect on eyes score showed a very low post-operative complication rate (0.4%). Long-term follow-up identified 6 cases of asymmetrical folds; only 1 patient agreed to revision surgery, while the other 5 remained satisfied with the outcome and declined revision. No case showed loosening of the knot.
Conclusion: The modified single-knot continuous buried suture technique is a simple, minimally invasive method of creating aesthetically pleasing double eyelids without surgery, with long-lasting results. Proper patient selection and good surgical technique are crucial to achieving a desirable outcome.

Keywords: blepharoplasty, double-eyelid, face-Q, non-incisional

Procedia PDF Downloads 122
26718 Surrogacy in India: Emerging Business or Disguised Human Trafficking

Authors: Priya Sepaha

Abstract:

Commercial surrogacy refers to a contract in which a woman carries a pregnancy for the intended parents. There are two types of surrogacy. In traditional surrogacy, the sperm of the donor or the father is artificially inseminated into the woman, who carries the fetus to birth. In gestational surrogacy, the egg and sperm of the intended parents are collected for artificial fertilization through the In Vitro Fertilization (IVF) technique, and after the embryo forms, it is transferred into the womb of a surrogate mother with the help of Assisted Reproductive Technology. Surrogacy has become so widespread in India that the country has been nicknamed the 'rent-a-womb' capital of the world, owing to relatively low costs and a lack of stringent regulatory legislation. The legal aspects surrounding surrogacy are complex, diverse, and mostly unsettled. Although surrogacy appears beneficial to the parties concerned, certain sensitive issues need to be addressed to ensure ample protection for all stakeholders. Commercial surrogacy is both an emerging business and a new means of human trafficking, particularly in India. Poor and illiterate women are often lured into such deals by their spouses or by brokers for easy money, and traffickers at times use force, fraud, or coercion to intimidate prospective surrogate mothers. A major share of the money from a covert surrogacy agreement is taken by the brokers. The Law Commission of India has specifically reviewed the issue as India emerges as a major global surrogacy destination. In the Manji case of 2008, the Supreme Court of India held that commercial surrogacy can be permitted with certain restrictions, but it directed the Legislature to pass an appropriate law governing surrogacy in India. The draft Assisted Reproductive Technology (ART) Bill, 2010 is still pending approval.
At present, the surrogacy contract between the parties and the ART clinics guidelines are perhaps the only guiding force. The Immoral Traffic (Prevention) Act, 1956 and Sections 366A and 372 of the Indian Penal Code, 1860 are perhaps the only existing laws dealing with human trafficking, yet none of these provisions specifically addresses trafficking for the purpose of commercial surrogacy. India remains one of the few countries that still allow commercial surrogacy. International surrogacy involves bilateral issues, where the laws of both nations must be in harmony for the concerns and interests of the parties involved to be amicably resolved. There is an urgent need to pass a comprehensive law incorporating the latest developments in this field, in order to make the practice ethical on the one hand and to curb disguised human trafficking on the other.

Keywords: business, human trafficking, legal, surrogacy

Procedia PDF Downloads 345
26717 Classification of Political Affiliations by Reduced Number of Features

Authors: Vesile Evrim, Aliyu Awwal

Abstract:

With advances in technology, the expression of opinions has shifted to the digital world. Politics, one of the hottest topics in opinion-mining research, is combined here with behavior analysis for determining political affiliation in text, which constitutes the subject of this paper. This study aims to classify news/blog texts as either Republican or Democrat using the minimum number of features. As an initial set, 68 features, 64 of them Linguistic Inquiry and Word Count (LIWC) features, were tested against 14 benchmark classification algorithms. In later experiments, the dimensionality of the feature vector was reduced using 7 feature selection algorithms. The results show that the Decision Tree, Rule Induction, and M5 Rules classifiers, when used with the SVM and IGR feature selection algorithms, performed best, reaching up to 82.5% accuracy on the given dataset. Further tests on single features and on the linguistic-category feature sets showed similar results. The feature 'function', an aggregate feature of the linguistic category, emerged as the most discriminative of the 68 features, achieving 81% accuracy by itself in classifying articles as Republican or Democrat.
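The pipeline described, ranking features by an information-gain-style criterion, keeping a small subset, and training a tree classifier, can be sketched as below; the data is synthetic, not the LIWC news/blog corpus, and mutual information stands in for the paper's IGR criterion:

```python
# Hedged sketch: feature selection followed by a decision tree,
# comparing accuracy with the full and the reduced feature sets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=68, n_informative=6,
                           random_state=0)
# Keep only the 5 features with the highest mutual information with y
X_small = SelectKBest(mutual_info_classif, k=5).fit_transform(X, y)

full = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
reduced = cross_val_score(DecisionTreeClassifier(random_state=0), X_small, y,
                          cv=5).mean()
print(f"68 features: {full:.3f}  |  5 selected features: {reduced:.3f}")
```

When most features are uninformative, as with many LIWC categories for a given task, the reduced model typically matches or beats the full one while being far easier to interpret.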

Keywords: feature selection, LIWC, machine learning, politics

Procedia PDF Downloads 383
26716 Impact of Map Generalization in Spatial Analysis

Authors: Lin Li, P. G. R. N. I. Pussella

Abstract:

When representing spatial data and their attributes on different types of maps, scale plays a key role in the process of map generalization. The process consists of two main operators: selection and omission. Once data are selected, they undergo several geometric transformations such as elimination, simplification, smoothing, exaggeration, displacement, aggregation, and size reduction. As a result of these operations, the geometry of spatial features, including length, sinuosity, orientation, perimeter, and area, is altered. The effect is worst when preparing small-scale maps, since the cartographer does not have enough space to represent all the features. In practice, when GIS users want to analyze a set of spatial data, they retrieve a dataset and perform the analysis without considering very important characteristics such as the scale, the purpose of the source map, and its degree of generalization. Further, GIS users use and compare maps with different degrees of generalization, and sometimes they go beyond the scale of the source map using the zoom-in facility, violating the basic cartographic rule that a larger-scale map should not be created from a smaller-scale map. The main objective of this study is to discuss the effect of map generalization on GIS analysis. Three digital maps at scales of 1:10,000, 1:50,000, and 1:250,000, prepared by the Survey Department of Sri Lanka, the national mapping agency of Sri Lanka, were used. Features common to all three maps were selected, and an overlay analysis was repeated with different combinations of the data. Road, river, and land-use data sets were used, and a simple model for finding the best location for a wildlife park served to identify the effects.
The results show remarkable effects of the different degrees of generalization: different locations with different geometries were obtained as outputs of the analysis. The study suggests that reasonable methods are needed to overcome this effect; as a solution, it would be very reasonable to bring all the data sets to a common scale before performing the analysis.

Keywords: generalization, GIS, scales, spatial analysis

Procedia PDF Downloads 331
26715 Identity Verification Based on Multimodal Machine Learning on Red Green Blue (RGB), Red Green Blue-Depth (RGB-D) and Voice Data

Authors: LuoJiaoyang, Yu Hongyang

Abstract:

In this paper, we experimented with a new approach to multimodal identification using RGB, RGB-D, and voice data. The combination of RGB and voice data has been applied in tasks such as emotion recognition with good results and stability, and the same holds for identity recognition. We believe that data from different modalities can reinforce one another and enhance the model's performance. Building on the bimodal setting, we add a third modality and attempt to improve the network's effectiveness by increasing the number of modalities. We also implemented single-modality identification systems, tested the data of each modality under clean and noisy conditions, and compared their performance with the multimodal model. In designing the multimodal model, we tried a variety of fusion strategies and chose the one with the best performance. The experimental results show that the multimodal system outperforms the single modalities, especially in dealing with noise, achieving an average improvement of 5%.
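One of the simplest fusion strategies the paper could have compared is score-level (late) fusion, averaging per-modality match scores; the sketch below simulates it with invented discriminability levels per modality (the actual networks, scores, and weights are not from the paper):

```python
# Toy sketch of score-level fusion across three simulated modalities.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
genuine = np.repeat([1, 0], 100)  # 100 genuine trials, 100 impostor trials

def modality_scores(separation):
    """Simulate a modality's match scores; higher separation = stronger."""
    return genuine * separation + rng.normal(0, 1, n_trials)

scores = {"rgb": modality_scores(2.0),
          "rgbd": modality_scores(2.5),
          "voice": modality_scores(1.5)}

def accuracy(s, threshold):
    return ((s > threshold).astype(int) == genuine).mean()

fused = np.mean(list(scores.values()), axis=0)  # simple average fusion
for name, s in scores.items():
    print(f"{name:>5}: {accuracy(s, s.mean()):.3f}")
print(f"fused: {accuracy(fused, fused.mean()):.3f}")
```

Averaging reduces the effective noise of independent modality errors, which is why fusion tends to help most under noisy conditions, matching the abstract's observation.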

Keywords: multimodal, three modalities, RGB-D, identity verification

Procedia PDF Downloads 73
26714 The Development of Competency with a Training Curriculum via Electronic Media for Condominium Managers

Authors: Chisakan Papapankiad

Abstract:

The purposes of this research were 1) to study the competencies of condominium managers, 2) to create a training curriculum delivered via electronic media for condominium managers, and 3) to evaluate that training curriculum. The research methods included document analysis, interviews, a questionnaire, and a try-out. A total of 20 experts were selected to provide data using the Delphi technique, and the designed curriculum was tried out with 30 condominium managers. The main steps of the research were analyzing and synthesizing the literature, creating the interview questions, conducting a factor analysis and developing the training curriculum, having it edited by experts, and trying it out with the sample groups. The findings revealed five core competencies: leadership, human resource management, management, communication, and self-development. The training curriculum was designed and all the learning materials were put on a CD. The curriculum was evaluated by five experts and found to be cohesive and suitable for use in the real world. Moreover, the findings revealed three further results: 1) the respondents' competencies after the experiment were higher than before it, significant at the 0.01 level, 2) the competencies were retained by the respondents for at least 12 weeks, also significant at the 0.01 level, and 3) the respondents' overall satisfaction was at the highest level.

Keywords: competency training curriculum, condominium managers, electronic media

Procedia PDF Downloads 288
26713 Unveiling the Impact of Ultra High Vacuum Annealing Levels on Physico-Chemical Properties of Bulk ZnSe Semiconductor

Authors: Kheira Hamaida, Mohamed Salah Halati

Abstract:

In this paper, our aim is to link simulation results with experimental ones as closely as possible, focusing on the electronic and optical properties of ZnSe. The total and partial densities of states were predicted using the Full-Potential Linearized Augmented Plane Wave (FP-LAPW) method with the Tran-Blaha modified Becke-Johnson (TB-mBJ) exchange-correlation potential. The upper valence energy levels contain contributions from the Se 4p and 3d states, with a considerable contribution from the electrons of the Zn 2s orbital. The dielectric function of w-ZnSe, in both its real and imaginary parts, shows a noticeable anisotropy. The microscopic origins of the electronic states responsible for the observed peaks in the spectrum are determined by decomposing the spectrum into the individual contributions of electronic transitions between pairs of bands (Vi, Ci), where Vi is an occupied state in the valence band and Ci an unoccupied state in the conduction band. X-ray Photoelectron Spectroscopy (XPS) is an important technique used to probe the homogeneity, stoichiometry, and purity of the title compound. To check the electron transitions derived from the simulations against experiment, Reflected Electron Energy Loss Spectroscopy (REELS), a technique of great sensitivity, is used to determine the interband electronic transitions. Within the optical window (Eg), the electron energy states created were also determined through a Gaussian deconvolution of the photoluminescence spectrum measured at room temperature.

Keywords: spectroscopy, WIEN2K, IIB-VIA semiconductors, dielectric function

Procedia PDF Downloads 68
26712 Non-Linear Causality Inference Using BAMLSS and Bi-CAM in Finance

Authors: Flora Babongo, Valerie Chavez

Abstract:

Inferring causality from observational data is a fundamental problem, especially in quantitative finance. Most papers so far analyze additive-noise models with linearity, nonlinearity, or Gaussian noise. We fill the gap by providing a nonlinear, non-Gaussian causal multiplicative-noise model that aims to distinguish cause from effect using a two-step method based on Bayesian Additive Models for Location, Scale and Shape (BAMLSS) and on Causal Additive Models (CAM). We tested our method on simulated and real data and reached an accuracy of 0.86 on average. As real data, we considered the causality between financial indices such as the S&P 500, Nasdaq, CAC 40, and Nikkei, and companies' log-returns. Our results can be useful for inferring causality when the data are heteroskedastic or non-injective.
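The general idea behind this family of methods, regress each way and prefer the direction whose residuals look independent of the predictor, can be illustrated with a toy bivariate sketch; this is not the paper's BAMLSS/CAM pipeline, and the dependence score here (correlation of residual magnitude with the predictor's magnitude) is a crude stand-in for a proper independence test:

```python
# Toy causal-direction sketch in the spirit of (additive-)noise-model
# methods: the wrong regression direction leaves residuals that depend
# on the predictor; the right direction does not.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = x + x**3 + 0.2 * rng.normal(size=2000)   # ground truth: x -> y

def dependence_score(cause, effect):
    """|corr(|residual|, |cause|)| after a nonparametric fit effect ~ cause."""
    fit = KNeighborsRegressor(20).fit(cause.reshape(-1, 1), effect)
    resid = effect - fit.predict(cause.reshape(-1, 1))
    return abs(np.corrcoef(np.abs(resid), np.abs(cause))[0, 1])

forward = dependence_score(x, y)    # residuals ~ independent of x
backward = dependence_score(y, x)   # residual spread varies with y
print(f"x->y score {forward:.3f}  vs  y->x score {backward:.3f}")
```

The lower score identifies the causal direction; BAMLSS extends this idea by modeling the scale (and shape) of the noise as well as its location, which is what handles multiplicative, heteroskedastic noise.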

Keywords: causal inference, DAGs, BAMLSS, financial index

Procedia PDF Downloads 153
26711 Vibration-Based Data-Driven Model for Road Health Monitoring

Authors: Guru Prakash, Revanth Dugalam

Abstract:

A road’s condition often deteriorates due to harsh loading, such as overloading by trucks, and severe environmental conditions, such as heavy rain, snow load, and cyclic loading. In the absence of proper maintenance planning, this results in potholes, wide cracks, bumps, and increased road roughness. In this paper, a data-driven model is developed to detect these damages using vibration and image signals. The key idea of the proposed methodology is that road anomalies manifest in these signals and can be detected by training a machine learning algorithm. The use of various machine learning techniques, such as the Support Vector Machine and the Random Forest method, will be investigated. The proposed model will first be trained and tested with artificially simulated data, and the model architecture will be finalized by comparing the accuracies of the candidate models. Once a model is fixed, a field study will be performed and data will be collected. The field data will be used to validate the proposed model and to predict the road’s future health condition. The proposed approach will help automate the road condition monitoring process, repair cost estimation, and maintenance planning.
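The first stage described, training a classifier on artificially simulated data before any field data exist, can be sketched as below; the signal model, the feature set (RMS, peak, crest factor), and all parameter values are illustrative assumptions, not the authors' design:

```python
# Sketch: Random Forest pothole detector trained on simulated
# accelerometer windows, standing in for the pre-field-study stage.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def vibration_features(pothole: bool, n: int):
    """Simulate n accelerometer windows; potholes add a sharp transient."""
    sig = rng.normal(0, 1, (n, 256))
    if pothole:
        sig[:, 100:110] += rng.normal(8, 2, (n, 10))   # impact spike
    rms = np.sqrt((sig**2).mean(axis=1))
    peak = np.abs(sig).max(axis=1)
    return np.column_stack([rms, peak, peak / rms])    # crest factor

X = np.vstack([vibration_features(False, 300), vibration_features(True, 300)])
y = np.repeat([0, 1], 300)    # 0 = smooth road, 1 = pothole
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("simulated-data accuracy:", round(clf.score(X_te, y_te), 3))
```

Once a model architecture wins on simulated data like this, the same feature extraction would be applied to field recordings for validation, as the abstract proposes.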

Keywords: SVM, data-driven, road health monitoring, pothole

Procedia PDF Downloads 88
26710 Prevalence of Suicidal Behavioral Experiences in the Tertiary Institution: Implication for Childhood Development

Authors: Moses Onyemaechi Ede, Chinedu Ifedi Okeke

Abstract:

This study examined the prevalence of suicidal behavioural experiences in a tertiary institution and its implications for childhood development. Two specific purposes, two research questions, and two null hypotheses guided the study. A descriptive design was used, drawing on the student population (N = 36,000) of the University of Nigeria, Nsukka; the sample consisted of 100 students selected through accidental sampling. A self-developed questionnaire titled the Suicidal Behaviour Questionnaire (SBQ) was used, and the data collected were analyzed using means and percentages. The results showed that the university students did not report suicidal behaviours and that suicidal experiences were not prevalent. Gender had no significant influence on male and female students' responses regarding their suicidal behavioural experiences, nor on their mean responses regarding the prevalence of suicidal experiences. Based on the findings, it is recommended that suicide education and prevention be taught in schools and that guidance counsellors mount bulletins on suicidology.

Keywords: suicide, behavioural experiences, tertiary institution, childhood development

Procedia PDF Downloads 140
26709 Construction and Demolition Waste Management in Indian Cities

Authors: Vaibhav Rathi, Soumen Maity, Achu R. Sekhar, Abhijit Banerjee

Abstract:

The construction sector in India is extremely resource and carbon intensive and contributes significantly to national greenhouse gas emissions. At the resource end, the industry consumes a significant portion of mining output. Resources such as sand and soil are the most exploited, and their rampant extraction is a constant source of impact on the environment and society. Cement, another resource used in abundance in building and construction, has a direct impact on limestone resources. Though India is rich in cement-grade limestone, efforts have to be made towards sustainable consumption of this resource to ensure its future availability. The high-volume use of these resources in India is a result of rapid urbanization. More cities have grown to a population of a million plus in the last decade, and existing million-plus cities are growing further. To cater to the needs of the growing urban population, construction activities are inevitable in the coming future, further increasing material consumption. Increased construction will also lead to a substantial increase in end-of-life waste from Construction and Demolition (C&D). Proper management of C&D waste therefore has the potential to reduce environmental pollution as well as contribute to resource efficiency in the construction sector. The present study deals with estimating, characterising, and documenting current management practices for C&D waste in 10 Indian cities of different geographies and classes. Based on primary data, the study draws conclusions on the potential of C&D waste to be used as an alternative to primary raw materials. The estimation results show that India generates 716 million tons of C&D waste annually, making the country the second-largest C&D waste generator in the world after China. The study also aimed at the utilization of C&D waste in building materials.
Waste samples collected from various cities have been used to replace 100% of the stone aggregates in paver blocks without any decrease in strength. However, management practices for C&D waste in cities remain poor despite the notification of rules and regulations for C&D waste management. Only a few cities have managed to install processing plants and set up management systems for C&D waste. There is therefore immense opportunity for the management and reuse of C&D waste in Indian cities.

Keywords: building materials, construction and demolition waste, cities, environmental pollution, resource efficiency

Procedia PDF Downloads 308
26708 Determination of Fatigue Limit in Post Impacted Carbon Fiber Reinforced Epoxy Polymer (CFRP) Specimens Using Self Heating Methodology

Authors: Deepika Sudevan, Patrick Rozycki, Laurent Gornet

Abstract:

This paper presents the experimental identification of the fatigue limit for pristine and impacted Carbon Fiber Reinforced Epoxy Polymer (CFRP) woven composites based on the relatively new self-heating methodology for composites. CFRP composites of [0/90]8 and quasi-isotropic configurations, prepared using the hand-layup technique, are subjected to low-energy impacts (20 J) simulating barely visible impact damage (BVID). A runway debris strike, tool drop, or hailstone impact can cause BVID on an aircraft fuselage made of carbon composites, so understanding the post-impact fatigue response of CFRP laminates is of immense importance to the aerospace community. The BVID zone on the specimens is characterized using the X-ray tomography technique. Both pristine and impacted specimens are subjected to several blocks of constant-amplitude (CA) fatigue loading, keeping the R-ratio constant but incrementing the mean loading stress after each block. The number of loading cycles in each block is a subjective parameter and varies between pristine and impacted CFRP specimens. To monitor the temperature evolution during fatigue loading, thermocouples are pasted on the CFRP specimens at specific locations. The fatigue limit is determined by two strategies: the first considers the stabilized temperature in every block, and the second considers the change in the temperature slope per block. The results show that both strategies can be adopted to determine the fatigue limit in both pristine and impacted CFRP composites.
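The slope-based strategy can be sketched numerically: below the fatigue limit the stabilized self-heating rises only slightly with the block stress, above it the rise steepens, and the intersection of two linear fits estimates the limit. The stress levels and temperature rises below are invented for illustration, not the authors' measurements:

```python
import numpy as np

# Hypothetical block mean stresses (MPa) and stabilized temperature rises (K):
# nearly flat below the fatigue limit, steeply rising above it.
stress = np.array([40, 60, 80, 100, 120, 140, 160, 180], float)
d_temp = np.array([0.05, 0.07, 0.08, 0.10, 0.60, 1.30, 2.10, 2.90])

# Fit one line to the low-stress regime and one to the high-stress regime
lo = np.polyfit(stress[:4], d_temp[:4], 1)   # slope, intercept below the limit
hi = np.polyfit(stress[4:], d_temp[4:], 1)   # slope, intercept above the limit

# The fatigue limit is estimated at the intersection of the two fits
fatigue_limit = (lo[1] - hi[1]) / (hi[0] - lo[0])
print(f"Estimated fatigue limit: {fatigue_limit:.0f} MPa")
```

With real thermocouple data, the split between the two regimes would itself be chosen from the observed change in slope per block.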

Keywords: CFRP, fatigue limit, low energy impact, self-heating, WRM

Procedia PDF Downloads 234
26704 Collaboration of Game-Based Learning with the Roaming The Stairs Model Using the Tajribi Method in PAI Lessons at the Ummul Mukminin Islamic Boarding School, Makassar, South Sulawesi

Authors: Ratna Wulandari, Shahidin

Abstract:

This article examines how the Game-Based Learning model, combined with the Roaming The Stairs game and the tajribi method, can make PAI lessons active and interactive. This research uses a qualitative approach with a case-study design. Data were collected through interviews, observation, and documentation, and analyzed through the stages of data reduction, data display, and verification and conclusion drawing. Data validity was tested using the triangulation method. The results show that (1) children in grades 9A, 9B, and 9C like learning PAI using the Roaming The Stairs game; (2) children in grades 9A, 9B, and 9C are active and can work in groups to solve problems in the Roaming The Stairs game; and (3) the class atmosphere becomes fun with this learning method, namely learning while playing.

Keywords: game based learning, Roaming The Stairs, Tajribi PAI

Procedia PDF Downloads 25
26706 Francophone University Students' Attitudes Towards English Accents in Cameroon

Authors: Eric Agrie Ambele

Abstract:

The norms and models for learning pronunciation are key issues nowadays in English Language Teaching in ESL contexts. This paper discusses these issues based on a study of the attitudes of some Francophone university students in Cameroon towards three English accents spoken in Cameroon: Cameroon Francophone English (CamFE), Cameroon English (CamE), and Hyperlectal Cameroon English (near Standard British English). To learn more about the treatment these English accents receive among these students, an aspect that had hitherto received little attention in the literature, a language attitude questionnaire and the matched-guise technique were used. Two methods of data analysis were employed: (1) the percentage count procedure and (2) the semantic differential scale. The findings reveal that the participants’ attitudes towards the selected accents vary in degree. Though Hyperlectal CamE emerged first, CamE second, and CamFE third, no accent, on average, received a negative evaluation. Two things can be deduced from these findings: first, CamE is gaining more and more recognition and can stand as an autonomous accent; second, since the participants rated Hyperlectal CamE higher than CamE, they would be less motivated in a context where CamE is the learning model. By implication, in teaching English pronunciation to Francophone learners in Cameroon, Hyperlectal Cameroon English should be the model.

Keywords: teaching pronunciation, English accents, Francophone learners, attitudes

Procedia PDF Downloads 201
26705 Alpha-To-Omega Phase Transition in Bulk Nanostructured Ti and (α+β) Ti Alloys

Authors: Askar Kilmametov, Julia Ivanisenko, Boris Straumal, Horst Hahn

Abstract:

The high-pressure α- to ω-phase transition was discovered in elemental Ti and Zr fifty years ago using static high pressure and has since been observed to occur between 2 and 12 GPa at room temperature, depending on the experimental technique, the pressure environment, and the sample purity. The fact that the ω-phase is retained in a metastable state at ambient conditions after the removal of pressure has been used to examine changes in magnetic and superconductive behavior, electronic band structure, and mechanical properties. However, fundamental knowledge of how combined mechanical treatment and high applied pressure drive ω-phase formation in Ti alloys is currently lacking and has to be studied in relation to the improved mechanical properties of bulk nanostructured states. In the present study, nanostructured (α+β) Ti alloys containing β-stabilizing elements such as Co, Fe, Cr, and Nb were produced by severe plastic deformation, namely the high-pressure torsion (HPT) technique. The HPT-induced α- to ω-phase transformation was revealed as a function of applied pressure and shear strain by means of X-ray diffraction, transmission electron microscopy, and differential scanning calorimetry. The transformation kinetics were compared with the kinetics of the pressure-induced transition. The orientation relationships between the α-, β-, and ω-phases were taken into consideration and analyzed according to theoretical calculations proposed earlier. The influence of the initial state before HPT appeared to be considerable for the subsequent α- to ω-phase transition. The thermal stability of the HPT-induced ω-phase was also discussed in the frame of the mechanical behavior of Ti and Ti-based alloys produced by shear deformation under high applied pressure.

Keywords: bulk nanostructured materials, high pressure phase transitions, severe plastic deformation, titanium alloys

Procedia PDF Downloads 420
26704 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications

Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon

Abstract:

The focus in the automotive industry is to reduce human operator and machine interaction so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, also described as Automated Fibre Placement (AFP). The AFP process is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand-stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding, and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated in the tape, activated through the application of heat. The stitching method is used as a baseline against which to compare the new splicing methods. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlap, alignment, and splicing parameters and were then tested in tension using a tensile testing machine.
Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique, examining the effect of temperature and overlap on the strength of the splice. The optimum splicing temperature was found to be at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; there was no improvement in bond strength from a 25 mm to a 30 mm overlap. The final analysis compared the different splicing methods to the stitched-bond baseline. The addition of an adhesive proved the best splicing method, achieving a maximum load of over 500 N, compared to the 26 N achieved by a stitched splice and 94 N by the binding method.

Keywords: analysis, automated fibre placement, high speed, splicing

Procedia PDF Downloads 157
26703 User Modeling from the Perspective of Improvement in Search Results: A Survey of the State of the Art

Authors: Samira Karimi-Mansoub, Rahem Abri

Abstract:

Currently, users expect high-quality and personalized information from search results. To satisfy users’ needs, personalized approaches to web search have been proposed. These approaches can provide the most appropriate answer for a user’s needs by using user context and incorporating information about the query, combining several search technologies. To carry out personalized web search, different techniques must be applied across the whole user search process. There are a number of possible deployments of personalized approaches, such as personalized web search, personalized recommendation, personalized summarization, and filtering systems, but the common feature of all approaches across these domains is that user modeling is utilized to provide personalized information from the Web. The most important work in personalized approaches is therefore user model mining. User modeling applications and technologies can be used in various domains depending on how the collected user information may be extracted. In addition, the techniques used to create the user model differ in each of these applications. Since previous studies have not provided a complete survey of this field, our purpose is to present a survey of the applications and techniques of user modeling from the viewpoint of improving search results, based on the existing literature and research.

Keywords: filtering systems, personalized web search, user modeling, user search behavior

Procedia PDF Downloads 282
26702 Wedding Organizer Strategy in the Covid-19 Pandemic Era in Surabaya, Indonesia

Authors: Rifky Cahya Putra

Abstract:

The corona pandemic has affected some countries severely. As a result, many traders and companies find it difficult to operate in this pandemic era, so human activities in many fields must follow a new lifestyle, known as the new normal. The transition from one activity to another certainly requires a high degree of adaptation, and almost all sectors have experienced the impact of this phase, one of which is the wedding organizer. This research aims to find out what strategies are used so that the company can keep running in this pandemic. Data were collected through interviews with the owner of the wedding organizer and his team. Qualitative descriptive data analysis used the interactive model, consisting of three main stages: data reduction, data presentation, and conclusion drawing. From the interviews, it was concluded that three strategies are used: social media, sponsorship, and promotion.

Keywords: strategy, wedding organizer, pandemic, Indonesia

Procedia PDF Downloads 138
26701 THz Phase Extraction Algorithms for a THz Modulating Interferometric Doppler Radar

Authors: Shaolin Allen Liao, Hual-Te Chien

Abstract:

Various THz phase extraction algorithms have been developed for a novel THz Modulating Interferometric Doppler Radar (THz-MIDR) recently developed by the author. The THz-MIDR differs from the well-known FTIR technique in that it introduces a continuously modulating reference branch in place of the time-consuming discrete stepping reference branch of FTIR. This change allows real-time tracking of a moving object and capture of its Doppler signature. The working principle of the THz-MIDR is similar to that of the FTIR technique: the incoming THz emission from the scene is split by a beam splitter/combiner; one of the beams is continuously modulated by a vibrating mirror or phase modulator while the other is reflected by a mirror; finally, the modulated reference beam and the reflected beam are recombined by the same beam splitter/combiner and detected by a THz intensity detector (for example, a pyroelectric detector). In order to extract the THz phase from the single intensity measurement signal, we have derived rigorous mathematical formulas for 3 Frequency-Banded (FB) signals: 1) the DC Low-Frequency-Banded (LFB) signal; 2) the Fundamental-Frequency-Banded (FFB) signal; and 3) the Harmonic-Frequency-Banded (HFB) signal. The THz phase extraction algorithms are then developed based on combinations of two or all three of these FB signals, together with efficient algorithms such as the Levenberg-Marquardt nonlinear fitting algorithm. Numerical simulations have also been performed in Matlab with simulated THz-MIDR interferometric signals of various signal-to-noise ratios (SNR) to verify the algorithms.
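The phase-extraction idea can be sketched as follows (our illustration, not the authors' code): assume the detected intensity follows a sinusoidally modulated interferogram and recover the THz phase by Levenberg-Marquardt fitting. The signal model and all parameter values are assumptions for demonstration:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500)
w = 2 * np.pi * 10.0           # assumed reference-mirror modulation frequency (Hz)

def intensity(params, t):
    a, b, m, phi = params       # offset, fringe amplitude, modulation depth, THz phase
    return a + b * np.cos(m * np.sin(w * t) + phi)

# Synthesize a noisy single-detector record with a known ground-truth phase
true = [1.0, 0.5, 2.0, 0.7]     # ground truth, phi = 0.7 rad
data = intensity(true, t) + rng.normal(0.0, 0.01, t.size)

# Levenberg-Marquardt nonlinear least-squares fit of all four parameters
fit = least_squares(lambda p: intensity(p, t) - data,
                    x0=[0.8, 0.4, 1.5, 0.3], method="lm")
print(f"Recovered THz phase: {fit.x[3]:.3f} rad (ground truth 0.700)")
```

In practice the fit would use whichever combination of the LFB, FFB, and HFB signals the derived formulas prescribe, rather than this single generic interferogram model.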

Keywords: algorithm, modulation, THz phase, THz interferometry Doppler radar

Procedia PDF Downloads 348
26700 Operative Tips of Strattice Based Breast Reconstruction

Authors: Cho Ee Ng, Hazem Khout, Tarannum Fasih

Abstract:

Acellular dermal matrices are increasingly used to reinforce the lower pole of the breast during implant breast reconstruction. There is no standard technique described in literature for the use of this product. In this article, we share our operative method of fixation.

Keywords: strattice, acellular dermal matrix, breast reconstruction, implant

Procedia PDF Downloads 399
26699 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate, and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks, and support activities, heightening the risk of suboptimal system performance, premature obsolescence, or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion, and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach to sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based methods in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity, and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
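A toy sketch of the non-deterministic tradespace step: enumerate candidate sensor-system designs, sample their uncertain performance by Monte Carlo, and keep the Pareto-efficient designs on expected utility versus cost. The design names, attribute values, and utility function are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Candidate designs: (name, cost in $k, mean detection accuracy, accuracy std)
designs = [("cheap-single",  10, 0.80, 0.05),
           ("dual-fused",    25, 0.90, 0.03),
           ("premium-array", 60, 0.95, 0.02)]

def expected_utility(mean, std, n=5000):
    """Monte Carlo expectation of a concave utility of accuracy."""
    acc = np.clip(rng.normal(mean, std, n), 0.0, 1.0)
    return float(np.mean(np.sqrt(acc)))   # diminishing returns on accuracy

evaluated = [(name, cost, expected_utility(m, s)) for name, cost, m, s in designs]

# Pareto filter: drop any design dominated in (lower cost, strictly higher utility)
pareto = [d for d in evaluated
          if not any(o[1] <= d[1] and o[2] > d[2] for o in evaluated)]
for name, cost, util in pareto:
    print(f"{name}: cost {cost}k, expected utility {util:.3f}")
```

A full MATE study would replace the single accuracy attribute with a weighted multi-attribute utility elicited from stakeholders, but the sample-then-filter structure is the same.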

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 189
26698 Photovoltaic Modules Fault Diagnosis Using Low-Cost Integrated Sensors

Authors: Marjila Burhanzoi, Kenta Onohara, Tomoaki Ikegami

Abstract:

Faults in photovoltaic (PV) modules should be detected as early and to the greatest extent possible. Conventional fault detection methods such as electrical characterization, visual inspection, infrared (IR) imaging, ultraviolet fluorescence, and electroluminescence (EL) imaging are used for this, but they either fail to detect the location or category of the fault, or they require expensive equipment and are not convenient for on-site application. Hence, these methods are not convenient for monitoring small-scale PV systems, and low-cost, efficient inspection techniques suitable for on-site application are indispensable. In this study, in order to establish an efficient inspection technique, the correlation between faults and the magnetic flux density on the surface of crystalline PV modules is investigated. The magnetic flux on the surface of normal and faulted PV modules is measured under short-circuit and illuminated conditions using two different sensor devices. One device is made of small integrated sensors, namely a 9-axis motion-tracking sensor with an embedded 3-axis electronic compass, an IR temperature sensor, an optical laser position sensor, and a microcontroller. This device measures the X, Y, and Z components of the magnetic flux density (Bx, By, and Bz) a few mm above the surface of a PV module and outputs the data as line graphs in a LabVIEW program. The second device is made of a laser optical sensor and two magnetic line-sensor modules consisting of 16 magnetic sensors. This device scans the magnetic field on the surface of the PV module and outputs the data as a 3D surface plot of the magnetic flux intensity in a LabVIEW program. A PC equipped with LabVIEW software is used for data acquisition and analysis for both devices. To show the effectiveness of the method, the measured results are compared to those of a normal reference module and to their EL images.
Through the experiments, it was confirmed that the magnetic field in the faulted areas has a different profile, which can be clearly identified in the measured plots. The measurement results showed a perfect correlation with the EL images, and using the position sensors, the exact location of the faults was identified. The method was applied to different modules, and various faults were detected. The proposed method offers on-site measurement and real-time diagnosis. Since simple sensors are used to make the device, it is low cost and convenient for small-scale or residential PV system owners.
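A hypothetical sketch of the comparison step: flag positions on a scanned magnetic-flux map whose deviation from a normal reference module exceeds a threshold. The grid size, flux values, and threshold are assumptions, not the authors' sensor data:

```python
import numpy as np

rng = np.random.default_rng(3)

# 8 x 16 grid of Bz readings (uT) from the line-sensor scan
reference = rng.normal(50.0, 0.5, (8, 16))      # normal reference-module profile
measured = reference + rng.normal(0.0, 0.5, (8, 16))
measured[3, 7] += 12.0                          # injected fault signature

# Flag cells whose deviation from the reference exceeds the noise floor
deviation = np.abs(measured - reference)
threshold = 5.0                                  # uT, chosen well above sensor noise
fault_rows, fault_cols = np.where(deviation > threshold)
for r, c in zip(fault_rows, fault_cols):
    print(f"Possible fault near row {r}, column {c} "
          f"(deviation {deviation[r, c]:.1f} uT)")
```

The flagged grid cells would then be mapped back to physical module coordinates via the laser position sensor and cross-checked against the EL image.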

Keywords: fault diagnosis, fault location, integrated sensors, PV modules

Procedia PDF Downloads 224
26697 Research on Routing Protocol in Ship Dynamic Positioning Based on WSN Clustering Data Fusion System

Authors: Zhou Mo, Dennis Chow

Abstract:

In the dynamic positioning system (DPS) for vessels, reliable information transmission between nodes basically relies on wireless protocols. From the perspective of cluster-based routing protocols for wireless sensor networks, a data fusion technology based on a sleep scheduling mechanism and remaining energy at the network layer is proposed. It applies the sleep scheduling mechanism to the routing protocols, considering each node's remaining energy and location information when selecting the cluster head. The problem of the uneven distribution of nodes across clusters is solved by an equilibrium mechanism. At the same time, a Classified Forwarding Mechanism as well as a Redelivery Policy are adopted to avoid congestion in the transmission of huge amounts of data, reduce the delay in data delivery, and enhance real-time response. In this paper, a simulation test is conducted on the improved routing protocols, which are shown to reduce the energy consumption of nodes and increase the efficiency of data delivery.
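An illustrative sketch (not the paper's exact protocol) of energy-aware cluster-head selection: each candidate node is scored by its remaining energy and its distance to the cluster centroid, and the highest-scoring node becomes the cluster head. The node coordinates, energies, and scoring weights are assumptions for demonstration:

```python
import math

nodes = [  # (node id, x position, y position, remaining energy in J)
    ("n1", 0.0, 0.0, 0.90),
    ("n2", 1.0, 0.5, 0.40),
    ("n3", 0.5, 1.0, 0.70),
    ("n4", 0.2, 0.4, 0.95),
]

# Cluster centroid, used as the location reference for head selection
cx = sum(n[1] for n in nodes) / len(nodes)
cy = sum(n[2] for n in nodes) / len(nodes)

def score(node):
    _, x, y, energy = node
    dist = math.hypot(x - cx, y - cy)
    # Favor high residual energy; penalize distance from the centroid
    return 0.7 * energy - 0.3 * dist

head = max(nodes, key=score)
print(f"Selected cluster head: {head[0]} (score {score(head):.3f})")
```

In the full protocol this selection would be re-run each round as energies deplete, with sleep scheduling deciding which non-head nodes stay awake.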

Keywords: DPS for vessel, wireless sensor network, data fusion, routing protocols

Procedia PDF Downloads 470