Search results for: human machine interface
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11674

6904 Multi-Response Optimization of EDM for Ti-6Al-4V Using Taguchi-Grey Relational Analysis

Authors: Ritesh Joshi, Kishan Fuse, Gopal Zinzala, Nishit Nirmal

Abstract:

Ti-6Al-4V is a titanium alloy with high strength, low weight, and corrosion resistance, characteristics required of materials used in the aerospace industry. Being a hard alloy, titanium is difficult to machine by conventional methods, which calls for non-conventional processes. In the present work, the effects of drilling a Ø 6 mm hole in Ti-6Al-4V using a copper (99%) electrode in the Electric Discharge Machining (EDM) process are analyzed. The effects of input parameters such as peak current, pulse-on time, and pulse-off time on output parameters, viz. material removal rate (MRR) and electrode wear rate (EWR), are studied. The multi-objective optimization technique of grey relational analysis is used for process optimization. Experiments are designed using an L9 orthogonal array. ANOVA is used to find the most contributing parameter, followed by confirmation tests to validate the results. An improvement of 7.45% in grey relational grade is observed.
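
As a hedged illustration of the grey relational analysis step described above, the sketch below computes grey relational grades for a set of invented experimental runs with two responses (MRR, larger-the-better; EWR, smaller-the-better). The response values, the equal response weights, and the distinguishing coefficient of 0.5 are assumptions for illustration, not the paper's measurements.

```python
ZETA = 0.5  # distinguishing coefficient, commonly set to 0.5

def normalize(values, larger_is_better):
    lo, hi = min(values), max(values)
    if larger_is_better:                     # e.g. material removal rate (MRR)
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]  # e.g. electrode wear rate (EWR)

def grey_relational_grades(responses):
    """responses: list of (values, larger_is_better), one entry per output."""
    normed = [normalize(vals, flag) for vals, flag in responses]
    n_runs = len(normed[0])
    grades = []
    for i in range(n_runs):
        # grey relational coefficient per response, then equal-weight average
        coeffs = [ZETA / ((1.0 - col[i]) + ZETA) for col in normed]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Invented responses for four experimental runs (not the paper's data)
mrr = [12.1, 15.4, 9.8, 14.0]   # higher is better
ewr = [0.31, 0.45, 0.22, 0.25]  # lower is better
grades = grey_relational_grades([(mrr, True), (ewr, False)])
best_run = max(range(len(grades)), key=grades.__getitem__)
```

The run with the highest grade is the best compromise across both responses, which is the quantity the confirmation experiments would then check.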

Keywords: ANOVA, electric discharge machining, grey relational analysis, Ti-6Al-4V

Procedia PDF Downloads 350
6903 Substantial Fatigue Similarity of a New Small-Scale Test Rig to Actual Wheel-Rail System

Authors: Meysam Naeimi, Zili Li, Roumen Petrov, Rolf Dollevoet, Jilt Sietsma, Jun Wu

Abstract:

The substantial similarity of the fatigue mechanism in a new test rig for rolling contact fatigue (RCF) has been investigated. A new reduced-scale test rig is designed to perform controlled RCF tests on wheel-rail materials. The fatigue mechanism of the rig is evaluated in this study using a combined finite element-fatigue prediction approach. The influences of loading conditions on fatigue crack initiation have been studied. Furthermore, the effects of some artificial defects (squat-shaped) on fatigue lives are examined. To simulate the vehicle-track interaction by means of the test rig, a three-dimensional finite element (FE) model is built up. The nonlinear material behaviour of the rail steel is modelled in the contact interface. The results of FE simulations are combined with the critical plane concept to determine the material points with the greatest likelihood of fatigue failure. Based on the stress-strain responses, fatigue life analysis is carried out by employing previously postulated criteria for fatigue crack initiation (plastic shakedown and ratchetting). The results are reported for various loading conditions and different defect sizes. Afterwards, the cyclic mechanism of the test rig is evaluated from the operational viewpoint. The results of the fatigue life predictions are compared with the number of cycles expected from the cyclic nature of the test rig. Finally, the duration of the experiments until fatigue crack initiation is roughly estimated.

Keywords: fatigue, test rig, crack initiation, life, rail, squats

Procedia PDF Downloads 500
6902 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines

Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma

Abstract:

Useful information has been extracted from road accident data in the United Kingdom (UK), using data analytics methods, for avoiding possible accidents in rural and urban areas. The analysis makes use of several methodologies such as data integration, support vector machines (SVM), correlation machines, and multinomial goodness of fit. The entire datasets were imported from the traffic department of the UK with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn help avoid accidents. Since the data are expected to grow continuously over time, this work primarily proposes a new framework model which can be trained on, and adapt itself to, new data and make accurate predictions. This work also throws some light on the use of the SVM methodology for text classifiers on the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology for this kind of research work.
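
The SVM methodology named above can be sketched with a minimal linear SVM trained by hinge-loss sub-gradient descent on invented two-feature "accident" records. This is a generic illustration, not the authors' pipeline or the UK dataset; the features, labels, and hyperparameters are all assumptions.

```python
import random

def train_linear_svm(X, y, lam=0.01, eta=0.1, epochs=500, seed=0):
    """Hinge-loss sub-gradient descent for a 2-feature linear SVM with bias."""
    rng = random.Random(seed)
    w, b = [0.0, 0.0], 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            xi, yi = X[i], y[i]
            margin = yi * (w[0]*xi[0] + w[1]*xi[1] + b)
            if margin < 1:       # inside the margin: take the hinge gradient
                w = [wj + eta*(yi*xj - lam*wj) for wj, xj in zip(w, xi)]
                b += eta * yi
            else:                # otherwise only the regularization shrink
                w = [wj - eta*lam*wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if w[0]*x[0] + w[1]*x[1] + b >= 0 else -1

# Invented records: (scaled speed limit, scaled rainfall) -> severe (+1) / slight (-1)
X = [(0.9, 0.8), (0.8, 0.9), (0.85, 0.7), (0.2, 0.1), (0.1, 0.3), (0.25, 0.2)]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

In practice a library implementation with kernels and cross-validation would replace this hand-rolled trainer; the sketch only shows the margin-maximizing mechanics.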

Keywords: support vector machines (SVM), machine learning (ML), department for transport (DfT)

Procedia PDF Downloads 255
6901 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system: it is responsible for attacks of coughing and wheezing, asthma, and acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) have been adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was based mostly upon the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of CNN-type networks, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers; in the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for each hour of the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with other models based on linear regression. The numerical tests, carried out using real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. Moreover, the use of neural networks increased the R² coefficient by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
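
The data flow described above (sensor channels in, a 24-element hourly forecast vector out) can be sketched at the shape level in plain Python. The kernel and the placeholder dense-layer weights below are assumptions for illustration, so the output numbers are meaningless; only the structure (per-channel convolution, pooling, dense layer emitting 24 values) mirrors the description.

```python
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool(signal, size=2):
    return [max(signal[i:i + size]) for i in range(0, len(signal) - size + 1, size)]

def relu(xs):
    return [x if x > 0 else 0.0 for x in xs]

def forward(channels, kernel, out_dim=24):
    # convolve each input channel, pool, concatenate into a feature vector
    features = []
    for ch in channels:
        features.extend(max_pool(relu(conv1d(ch, kernel))))
    # placeholder dense layer with deterministic pseudo-weights (untrained)
    out = []
    for o in range(out_dim):
        w = [((o + j) % 7 - 3) / 10.0 for j in range(len(features))]
        out.append(sum(wi * fi for wi, fi in zip(w, features)))
    return out

# Invented 48-hour histories for four channels: PM10, PM2.5, temperature, wind
hours = list(range(48))
channels = [[20 + (h % 24) for h in hours],
            [12 + 0.5 * (h % 24) for h in hours],
            [5 + 0.1 * h for h in hours],
            [3.0] * 48]
forecast = forward(channels, kernel=[0.25, 0.5, 0.25])
```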

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 130
6900 A Review of the Potential Impact of Employer Branding on Employee

Authors: K. V. N. K. C. Sharma

Abstract:

Globalization, coupled with increasing competition, is compelling organizations to adopt innovative strategies and identify core competencies in order to distinguish themselves from the competition. The capability of an organization is no longer determined by its products or services alone. Intellectual assets and the quality of the human resource are fast emerging as key differentiators. Corporations now position themselves as 'brands' not solely to market their products and services, but also to lure and retain the best talent in the business. This paper identifies leadership as the key element in developing an organization's brand, which has a significant influence on the employee's eventual perception of the external brand as portrayed by the organization. External branding incorporates innovation, consumer concern, trust, quality, and sustainability. The paper contends that employees are indeed an organization's 'brand ambassadors'. Internal branding involves taking care of these ambassadors of the corporate brand, i.e., the human resource. If employees of an organization are not exposed to the organization's branding (an ongoing process that functionally aligns, motivates, and empowers employees at all levels to consistently provide a satisfying customer experience), the external brand could be jeopardized. Internal branding, in this sense, refers to the employees' perception of the organization's brand. The current business environment can, at best, be termed volatile. Employees with the right technical and behavioural skills remain a scarce resource, and employers need to be ready to capture the attention, interest, and commitment of the best and brightest candidates. This paper attempts to review and understand the relationship between employer branding and employee retention, and also seeks to identify the potential impact of employer branding across the factors affecting employees.

Keywords: external branding, organisation personnel, internal branding, leadership

Procedia PDF Downloads 225
6899 Air Pollution: The Journey from Single Particle Characterization to in vitro Fate

Authors: S. Potgieter-Vermaak, N. Bain, A. Brown, K. Shaw

Abstract:

It is well known from the public news media that air pollution is a health hazard and is responsible for early deaths. The quantification of the relationship between air quality and health is a probing question not easily answered. It is known that airborne particulate matter (APM) <2.5 µm deposits in the tracheal and alveolar zones, and our research probes the possibility of quantifying pulmonary injury by linking reactive oxygen species (ROS) in these particles to DNA damage. Currently, APM mass concentration is linked to early deaths, and limited studies probe the influence of other properties on human health. To predict the full extent and type of impact, particles need to be characterised for chemical composition and structure. APMs are routinely analysed for their bulk composition, but of late, analysis on a micro level probing single-particle character, using micro-analytical techniques, is being considered. The latter, single particle analysis (SPA), permits one to obtain detailed information on chemical character from nano- to micron-sized particles. This paper aims to provide a snapshot of studies using data obtained from chemical characterisation and its link with in-vitro studies to inform on personal health risks. For this purpose, two studies are compared, namely, the bioaccessibility of the inhalable fraction of urban road dust versus total suspended particulates (TSP) collected in the same urban environment. The significant influence of metals such as Cu and Fe in TSP on DNA damage is illustrated. The speciation of Hg (determined by SPA) in different urban environments proved to dictate its bioaccessibility in artificial lung fluids rather than its concentration.

Keywords: air pollution, human health, in-vitro studies, particulate matter

Procedia PDF Downloads 213
6898 Machinability Study of A201-T7 Alloy

Authors: Onan Kilicaslan, Anil Kabaklarli, Levent Subasi, Erdem Bektas, Rifat Yilmaz

Abstract:

Aluminum-copper casting alloys are well known for their high mechanical strength, especially when compared to the more commonly used aluminum-silicon alloys. A201 has one of the best strength-to-weight ratios among aluminum alloys, which makes it suitable for premium-quality casting applications in the aerospace and automotive industries. It is reported that A201 has low castability but is easy to machine. However, there is a need to specifically determine the process window for feasible machining. This research investigates the machinability of the A201 alloy after T7 heat treatment in terms of chip/burr formation, surface roughness, hardness, and microstructure. The samples are cast with the low-pressure sand casting method, and milling experiments are performed with uncoated carbide tools using different cutting speeds and feeds. Statistical analysis is used to correlate the machining parameters with surface integrity. It is found that machinability depends strongly on the cutting conditions, and a process window is determined.

Keywords: A201-T7, machinability, milling, surface integrity

Procedia PDF Downloads 180
6897 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of points on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example, band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, the result can be an order of magnitude or more in speed-up. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows for the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on four different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at five different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO.
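
The Δ-ML strategy described above (high fidelity = low fidelity + learned correction) can be sketched with a one-dimensional toy: fit the low-to-high difference against a single invented descriptor, then add the fitted correction back to a cheap calculation at prediction time. All numbers are synthetic, and the linear fit merely stands in for the paper's graph-network correction map.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of ys = a*xs + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Invented descriptor (e.g. a size measure) with low- and high-fidelity energies
desc   = [1.0, 2.0, 3.0, 4.0, 5.0]
e_low  = [-10.0, -19.5, -29.0, -38.5, -48.0]
e_high = [-10.4, -20.3, -30.2, -40.1, -50.0]  # systematic drift vs. e_low

delta = [h - l for h, l in zip(e_high, e_low)]  # the correction target
a, b = fit_line(desc, delta)

def predict_high(x, low_result):
    # cheap low-fidelity result plus the learned correction
    return low_result + a * x + b

pred = predict_high(6.0, -57.5)  # "unseen molecule": cheap calc gives -57.5
```

Only five high-fidelity points were needed here because the map being learned (the difference) is much smoother than the raw energies, which is the core economy of the Δ-ML idea.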

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 9
6896 Metrology-Inspired Methods to Assess the Biases of Artificial Intelligence Systems

Authors: Belkacem Laimouche

Abstract:

With the field of artificial intelligence (AI) experiencing exponential growth, fueled by technological advancements that pave the way for increasingly innovative and promising applications, there is an escalating need to develop rigorous methods for assessing their performance in pursuit of transparency and equity. This article proposes a metrology-inspired statistical framework for evaluating bias and explainability in AI systems. Drawing from the principles of metrology, we propose a pioneering approach, using a concrete example, to evaluate the accuracy and precision of AI models, as well as to quantify the sources of measurement uncertainty that can lead to bias in their predictions. Furthermore, we explore a statistical approach for evaluating the explainability of AI systems based on their ability to provide interpretable and transparent explanations of their predictions.
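
A minimal sketch of the metrology-style bookkeeping the abstract alludes to, assuming invented reference values and model predictions: treat the predictions as measurements, estimate the bias (systematic error) and the standard uncertainty of that bias, and report an expanded uncertainty with the conventional coverage factor k = 2. None of the values below come from the paper.

```python
import math

# Invented reference ("true") values and corresponding model predictions
reference   = [10.0, 12.0, 9.5, 11.0, 10.5, 13.0]
predictions = [10.4, 12.5, 9.9, 11.6, 10.8, 13.4]

errors = [p - r for p, r in zip(predictions, reference)]
n = len(errors)

bias = sum(errors) / n                                  # systematic component
s = math.sqrt(sum((e - bias) ** 2 for e in errors) / (n - 1))  # spread
u = s / math.sqrt(n)                                    # standard uncertainty of the bias
U = 2 * u                                               # expanded uncertainty, k = 2
```

In a metrology-inspired report one would quote the bias together with U, so a persistent offset in the model's predictions is separated from its random scatter.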

Keywords: artificial intelligence, metrology, measurement uncertainty, prediction error, bias, machine learning algorithms, probabilistic models, interlaboratory comparison, data analysis, data reliability, measurement of bias impact on predictions, improvement of model accuracy and reliability

Procedia PDF Downloads 92
6895 Fast Fourier Transform-Based Steganalysis of Covert Communications over Streaming Media

Authors: Jinghui Peng, Shanyu Tang, Jia Li

Abstract:

Steganalysis seeks to detect the presence of secret data embedded in cover objects, and there is an imminent demand to detect hidden messages in streaming media. This paper shows how a steganalysis algorithm based on the Fast Fourier Transform (FFT) can be used to detect the existence of secret data embedded in streaming media. The proposed algorithm uses machine parameter characteristics and a network sniffer to determine whether the Internet traffic contains streaming channels. The detected streaming data are then transferred from the time domain to the frequency domain through the FFT. The distributions of power spectra in the frequency domain of original VoIP streams and stego VoIP streams are compared in turn using a t-test, achieving a p-value of 7.5686E-176, far below the significance threshold. The results indicate that the proposed FFT-based steganalysis algorithm is effective in detecting secret data embedded in VoIP streaming media.
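
The detection idea, comparing the power spectra of cover and stego streams with a t-test, can be sketched as follows on synthetic signals, with a naive DFT standing in for the FFT. The embedding artifact (a weak extra tone) and the Welch t statistic are illustrative assumptions; no p-value is computed, since that would need a t-distribution CDF.

```python
import math

def power_spectrum(x):
    """Naive DFT power spectrum over bins 0 .. n/2-1 (stand-in for an FFT)."""
    n = len(x)
    spec = []
    for k in range(n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spec.append((re * re + im * im) / n)
    return spec

def welch_t(a, b):
    """Two-sample Welch t statistic between spectra a and b."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
    vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

n = 64
cover = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]         # clean carrier
stego = [c + 0.3 * math.sin(2 * math.pi * 10 * t / n)                 # + weak artifact
         for t, c in enumerate(cover)]

cover_spec = power_spectrum(cover)
t_stat = welch_t(cover_spec, power_spectrum(stego))
```

The stego spectrum carries extra energy, so the statistic is negative here; on real VoIP payloads the distributions are noisier and the test is run over many frames.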

Keywords: steganalysis, security, Fast Fourier Transform, streaming media

Procedia PDF Downloads 129
6894 Intelligent Algorithm-Based Tool-Path Planning and Optimization for Additive Manufacturing

Authors: Efrain Rodriguez, Sergio Pertuz, Cristhian Riano

Abstract:

Tool-path generation is an essential step in process planning for FFF (Fused Filament Fabrication)-based Additive Manufacturing (AM). In the manufacture of a mechanical part using additive processes, high resource consumption and prolonged production times are inherent drawbacks, mainly due to non-optimized tool-path generation. In this work, we propose a heuristic-search, intelligent algorithm-based approach for optimized tool-path generation for FFF-based AM. The main benefit of this approach is a significant reduction of travel moves, i.e., moves the AM machine performs without any material deposition. The optimization method used reduces the number of travel moves without extrusion in comparison with commercial software such as Slic3r or CuraEngine, which means a reduction in production time.
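
The travel-move reduction the abstract targets can be illustrated with a generic nearest-neighbour reordering of extrusion segments; this is a common baseline sketch, not the authors' heuristic-search algorithm, and the segment coordinates and start position are invented.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def travel_length(order, segments, start=(0.0, 0.0)):
    """Total non-extrusion travel when printing segments in the given order."""
    pos, total = start, 0.0
    for i in order:
        a, b = segments[i]
        total += dist(pos, a)   # travel move (no extrusion) to segment start
        pos = b                 # extrude along the segment, end at b
    return total

def greedy_order(segments, start=(0.0, 0.0)):
    """Always jump to the segment whose start point is nearest."""
    remaining = set(range(len(segments)))
    pos, order = start, []
    while remaining:
        i = min(remaining, key=lambda j: dist(pos, segments[j][0]))
        order.append(i)
        remaining.remove(i)
        pos = segments[i][1]
    return order

# Invented layer: four vertical extrusion segments ((x1, y1), (x2, y2))
segments = [((0, 0), (0, 5)), ((10, 0), (10, 5)),
            ((0, 6), (0, 11)), ((10, 6), (10, 11))]
naive = travel_length(range(len(segments)), segments)
optimized = travel_length(greedy_order(segments), segments)
```

Even on this four-segment toy layer the reordering cuts the travel distance by roughly a quarter; slicers apply the same idea per layer, which is where the production-time savings come from.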

Keywords: additive manufacturing, tool-path optimization, fused filament fabrication, process planning

Procedia PDF Downloads 430
6893 Tree Species Classification Using Effective Features of Polarimetric SAR and Hyperspectral Images

Authors: Milad Vahidi, Mahmod R. Sahebi, Mehrnoosh Omati, Reza Mohammadi

Abstract:

Forest management organizations need information to perform their work effectively, and remote sensing is an effective method of acquiring information about the Earth. Two remote sensing datasets were used to classify forested regions. Firstly, all extractable features from the hyperspectral and PolSAR images were extracted. The optical features were spectral indices related to chemical and water content, structural indices, effective bands, and absorption features; the PolSAR features were the original data, target decomposition components, and SAR discriminator features. Secondly, particle swarm optimization (PSO) and genetic algorithms (GA) were applied to select optimal features. Furthermore, the support vector machine (SVM) classifier was used to classify the image. The results showed that the combination of PSO and SVM had higher overall accuracy than the other cases, providing an overall accuracy of about 90.56%. The effective features were the spectral indices, the bands in the shortwave infrared (SWIR) and visible ranges, and certain PolSAR features.
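
The PSO feature-selection step can be sketched with a bare-bones PSO in which particles encode feature subsets via thresholded positions, and an invented per-feature relevance score penalized by subset size stands in for the SVM cross-validation accuracy a real wrapper would use. Particle counts, coefficients, and scores are all assumptions for illustration.

```python
import random

relevance = [0.9, 0.1, 0.8, 0.05, 0.7, 0.1]  # invented per-feature scores

def fitness(mask):
    """Reward selected relevance, penalize subset size (SVM-accuracy stand-in)."""
    if not any(mask):
        return -1.0
    return sum(r for r, m in zip(relevance, mask) if m) - 0.15 * sum(mask)

def pso_select(n_features, n_particles=12, iters=60, seed=1):
    rng = random.Random(seed)
    mask = lambda xs: [x > 0.5 for x in xs]          # position -> subset
    X = [[rng.random() for _ in range(n_features)] for _ in range(n_particles)]
    V = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [row[:] for row in X]
    gbest = max(pbest, key=lambda p: fitness(mask(p)))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (pbest[i][d] - X[i][d])
                           + 1.5 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(1.0, max(0.0, X[i][d] + V[i][d]))
            if fitness(mask(X[i])) > fitness(mask(pbest[i])):
                pbest[i] = X[i][:]
                if fitness(mask(pbest[i])) > fitness(mask(gbest)):
                    gbest = pbest[i][:]
    return mask(gbest)

selected = pso_select(len(relevance))
```

In the paper's setting, `fitness` would retrain and cross-validate the SVM on each candidate subset of the hyperspectral/PolSAR features instead of summing fixed scores.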

Keywords: hyperspectral, PolSAR, feature selection, SVM

Procedia PDF Downloads 401
6892 Allergenic Potential of Airborne Algae Isolated from Malaysia

Authors: Chu Wan-Loy, Kok Yih-Yih, Choong Siew-Ling

Abstract:

The human health risks due to poor air quality caused by a wide array of microorganisms have attracted much interest. Airborne algae have been reported as early as the 19th century, and they can be found in the air of tropical and warm atmospheres. Airborne algae normally originate from water surfaces, soil, trees, buildings, and rock surfaces. It is estimated that at least 2880 algal cells are inhaled per day by a human. However, relatively little data have been published on airborne algae and their related adverse health effects, except for sporadic reports of algae-associated clinical allergenicity. A collection of airborne algae cultures was established following a recent survey on the occurrence of airborne algae in indoor and outdoor environments in Kuala Lumpur. The aim of this study was to investigate the allergenic potential of the isolated airborne green and blue-green algae, namely Scenedesmus sp., Cylindrospermum sp., and Hapalosiphon sp. Suspensions of freeze-dried airborne algae were administered to a BALB/c mouse model through the intra-nasal route to determine their allergenic potential. Results showed that Scenedesmus sp. (1 mg/mL) increased systemic IgE levels in mice by 3-8 fold compared to pre-treatment, whereas Cylindrospermum sp. and Hapalosiphon sp. at a similar concentration caused IgE to increase by 2-4 fold. The potential of airborne algae to cause IgE-mediated type 1 hypersensitivity was elucidated using other immunological markers, namely the cytokine interleukins (IL)-4, 5, and 6 and interferon-γ (IFN-γ). When the amounts of interleukins in mouse serum were compared between day 0 and day 53 (day of sacrifice), Hapalosiphon sp. (1 mg/mL) increased the expression of IL-4 and IL-6 by 8 fold, while Cylindrospermum sp. (1 mg/mL) increased the expression of IL-4 and IFN-γ by 8 and 2 fold, respectively. In conclusion, repeated exposure to the three selected airborne algae may stimulate the immune response and generate IgE in a mouse model.

Keywords: airborne algae, respiratory, allergenic, immune response, Malaysia

Procedia PDF Downloads 223
6891 Experimental Study on the Preparation of Pelletizing of the Panzhihua's Fine Ilmenite Concentrate

Authors: Han Kexi, Lv Xuewei, Song Bing

Abstract:

This paper focuses on the preparation of pellets from Panzhihua ilmenite concentrate to satisfy the requirements of smelting titania slag. The effects of moisture content, mixing time of the raw materials, pelletizing pressure, roller rotating speed, and drying temperature and time on the pelletizing yield and compressive strength were investigated. The experimental results show that when the moisture content was controlled at 2.0%-2.5%, the mixing time at 20 min, and the pressure of the ball-forming machine at 13-15 MPa, the pelletizing yield reached up to 85%. When the roller rotating speed was 6-8 r/min and the drying temperature and time were 350 °C and 40-60 min, respectively, the compressive strength of the pellets exceeded 1500 N. The prepared pellets can meet the requirements of smelting titania slag.

Keywords: Panzhihua fine ilmenite concentrate, pelletizing, pelletizing yield, compressive strength, drying

Procedia PDF Downloads 202
6890 Identification of Promising Infant Clusters to Obtain Improved Block Layout Designs

Authors: Mustahsan Mir, Ahmed Hassanin, Mohammed A. Al-Saleh

Abstract:

The layout optimization of building blocks of unequal areas has applications in many disciplines including VLSI floorplanning, macrocell placement, unequal-area facilities layout optimization, and plant or machine layout design. A number of heuristics and some analytical and hybrid techniques have been published to solve this problem. This paper presents an efficient high-quality building-block layout design technique especially suited for solving large-size problems. The higher efficiency and improved quality of optimized solutions are made possible by introducing the concept of Promising Infant Clusters in a constructive placement procedure. The results presented in the paper demonstrate the improved performance of the presented technique for benchmark problems in comparison with published heuristic, analytic, and hybrid techniques.

Keywords: block layout problem, building-block layout design, CAD, optimization, search techniques

Procedia PDF Downloads 371
6889 Linking Metabolism, Pluripotency and Epigenetic Changes during Early Differentiation of Embryonic Stem Cells

Authors: Arieh Moussaieff, Bénédicte Elena-Herrmann, Yaakov Nahmias, Daniel Aberdam

Abstract:

Differentiation of pluripotent stem cells is a slow process, marked by the gradual loss of pluripotency factors over days in culture. While the first few days of differentiation show minor changes in the cellular transcriptome, the underlying intracellular signaling pathways remain largely unknown. Recently, several groups demonstrated that the metabolism of pluripotent mouse and human cells is different from that of somatic cells, showing a marked increase in glycolysis, previously identified in cancer as the Warburg effect. Here, we sought to identify the earliest metabolic changes induced in the first hours of differentiation. High-resolution NMR analysis identified 35 metabolites and a distinct, gradual transition in metabolism during early differentiation. Metabolic and transcriptional analyses showed the induction of glycolysis toward acetate and acetyl-CoA in pluripotent cells, and an increase in cholesterol biosynthesis during early differentiation. Importantly, this metabolic pathway regulated the differentiation of human and mouse embryonic stem cells. Acetate delayed differentiation, preventing differentiation-induced histone de-acetylation in a dose-dependent manner. Glycolytic inhibitors upstream of acetate caused differentiation of pluripotent cells, while those downstream delayed differentiation. Our data suggest that a rapid loss of glycolysis in early differentiation down-regulates acetate and acetyl-CoA production, causing a loss of histone acetylation and a concomitant loss of pluripotency. They demonstrate that pluripotent stem cells utilize a novel metabolic pathway to maintain pluripotency through acetate/acetyl-CoA and highlight the important role metabolism plays in pluripotency and early differentiation of stem cells.

Keywords: pluripotency, metabolomics, epigenetics, acetyl-CoA

Procedia PDF Downloads 456
6888 A Study of Serum Beta 2-Microglobulin (β2M) and Lipid Bound Sialic Acid (LSA) Levels in Oral Carcinoma Patients

Authors: Kapoor Anurag, Sharma Pradeep, Mittal K Kailash, Kumar Ajai, Jawad Kalbe, Amit Kumar Singh

Abstract:

Background: Oral squamous cell carcinoma (OSCC) is the most prevalent malignant tumour on a global scale. Limited research has been conducted on tumour markers in oral cancer, and additional evaluation is required for several tumour markers that show clinical promise. The present study aimed to find the correlation of β2-microglobulin and lipid bound sialic acid with oral carcinoma. Methodology: The present case-control study was carried out on 35 patients with histopathologically confirmed OSCC and 35 age-matched controls. Serum concentrations of β2-microglobulin and sialic acid in the participants were determined via ELISA and a spectrophotometric technique, respectively. Results: The OSCC group consisted of 20 males and 15 females, with an average age of 58 years, while the control group comprised 18 males and 17 females, with an average age of 55 years. Elevated levels of β2-microglobulin (3.87±0.12) and LSA (73.57±2.42) were observed in OSCC patients compared to controls (2.25±0.18 and 65.21±2.06, respectively). Further examination based on smoking status revealed a significant increase in both β2-microglobulin and LSA levels among smokers compared to non-smokers (p < 0.05). Conclusion: The study suggests a notable association between higher levels of β2-microglobulin and LSA in OSCC patients who smoke compared to non-smokers, leading to the hypothesis that this disparity could serve as a significant contributing factor in the advancement of oral cancer.

Keywords: biochemistry, human cancer, oral carcinoma, tumour marker

Procedia PDF Downloads 27
6887 A Survey of Sentiment Analysis Based on Deep Learning

Authors: Pingping Lin, Xudong Luo, Yifan Fan

Abstract:

Sentiment analysis is a very active research topic. Every day, Facebook, Twitter, Weibo, and other social media, as well as major e-commerce websites, generate a massive number of comments, which can be used to analyse people's opinions or emotions. The existing methods for sentiment analysis are based mainly on sentiment dictionaries, machine learning, and deep learning. The first two kinds of methods rely heavily on sentiment dictionaries or large amounts of labelled data; the third overcomes these two problems, so in this paper we focus on it. Specifically, we survey various sentiment analysis methods based on convolutional neural networks, recurrent neural networks, long short-term memory, deep neural networks, deep belief networks, and memory networks. We compare their features, advantages, and disadvantages. Also, we point out the main problems of these methods, which may be worthy of careful study in the future. Finally, we examine the application of deep learning in multimodal sentiment analysis and aspect-level sentiment analysis.

Keywords: document analysis, deep learning, multimodal sentiment analysis, natural language processing

Procedia PDF Downloads 147
6886 The Survival of Bifidobacterium longum in Frozen Yoghurt Ice Cream and Its Properties Affected by Prebiotics (Galacto-Oligosaccharides and Fructo-Oligosaccharides) and Fat Content

Authors: S. Thaiudom, W. Toommuangpak

Abstract:

Yoghurt ice cream (YIC) containing prebiotics and probiotics is increasingly recognized among health-conscious consumers: not only does it benefit consumers' health, but its taste and freshness also make it easy for people to accept. However, the survival of probiotics, especially Bifidobacterium longum, which is found in the human gastrointestinal tract and is beneficial to the human gut, still needed to be studied under conditions as severe as the whipping and freezing of the ice cream process. Low- and full-fat yoghurt ice cream containing 2 and 10% (w/w) fat (LYIC and FYIC, respectively) was produced by mixing 20% yoghurt containing B. longum into milk ice cream mix. Fructo-oligosaccharides (FOS) or galacto-oligosaccharides (GOS) at 0, 1, and 2% (w/w) were separately used as prebiotics in order to improve the survival of B. longum. Survival of this bacterium as a function of ice cream storage time, together with the ice cream properties, was investigated. The results showed that prebiotics, especially FOS, could improve the viable count of B. longum: the higher the prebiotic concentration, the greater the survival of B. longum. These prebiotics could prolong the survival of B. longum for up to 60 days, with the surviving numbers still at the recommended level (10⁶ CFU per gram). Fat content and prebiotics did not significantly affect the total acidity or the overrun of the samples, but an increase in fat content significantly increased the fat particle size, probably because of partial coalescence occurring in FYIC rather than in LYIC. However, the addition of GOS or FOS could reduce the fat particle size, especially in FYIC. GOS appeared to reduce the hardness of YIC more than FOS did. High fat content (10% fat) lowered the melting rate of YIC significantly more than 2% fat did, since the three-dimensional networks of partially coalesced fat theoretically occur more in FYIC than in LYIC. However, FOS seemed to retard the melting rate of the ice cream better than GOS. In conclusion, GOS and FOS in YIC with different fat contents can enhance the survival of B. longum and affect the physical and chemical properties of the yoghurt ice cream.
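The reported 60-day survival above the recommended 10⁶ CFU/g level can be put in context with the log-linear die-off model customarily used in probiotic storage studies; the sketch below is illustrative only, with placeholder parameters rather than values fitted to this study.

```python
import math

# Log-linear (first-order) die-off model often used for probiotic survival
# during frozen storage: log10 N(t) = log10 N0 - t / D, where D is the
# decimal reduction time in days. All parameter values here are purely
# illustrative assumptions, not fitted to the paper's data.
def viable_count(n0, d_value, t_days):
    """Viable count (CFU/g) after t_days of storage."""
    return n0 * 10 ** (-t_days / d_value)

def shelf_life(n0, d_value, n_min=1e6):
    """Days until the count falls to the recommended minimum n_min (CFU/g)."""
    return d_value * math.log10(n0 / n_min)
```

For instance, a product starting at 10⁸ CFU/g with a hypothetical D of 60 days would stay above the 10⁶ CFU/g threshold for 120 days under this model.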

Keywords: Bifidobacterium longum, prebiotic, survival, yoghurt ice cream

Procedia PDF Downloads 142
6885 Traffic Light Detection Using Image Segmentation

Authors: Vaishnavi Shivde, Shrishti Sinha, Trapti Mishra

Abstract:

Traffic light detection from a moving vehicle is an important technology both for driver safety assistance functions and for autonomous driving in the city. This paper proposes a deep-learning-based traffic light recognition method that consists of a pixel-wise image segmentation technique and a fully convolutional network, the UNET architecture. A method for detecting the position and recognizing the state of traffic lights in video sequences is presented and evaluated using the Traffic Light Dataset, which contains masked traffic light image data. The first stage is detection, accomplished through image processing (image segmentation) techniques such as image cropping, color transformation, and segmentation of candidate traffic lights. The second stage is recognition, i.e., identifying the color of the traffic light and thus its state, which is achieved using a convolutional neural network (the UNET architecture).
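As a rough illustration of what the recognition stage must decide, the rule-based sketch below classifies a segmented lamp region by its dominant pixel colour; the paper itself performs this step with a UNET-based convolutional network, and the thresholds here are arbitrary assumptions.

```python
# Toy stand-in for the recognition stage: classify the state of a segmented
# traffic-light region from its dominant pixel colour. The RGB thresholds are
# illustrative assumptions; the real pipeline learns this with a CNN.
def classify_light(pixels):
    """pixels: iterable of (r, g, b) tuples from the segmented lamp region."""
    votes = {"red": 0, "yellow": 0, "green": 0}
    for r, g, b in pixels:
        if r > 150 and g > 150 and b < 100:   # bright red + green = yellow lamp
            votes["yellow"] += 1
        elif r > 150 and g < 100:             # dominant red channel
            votes["red"] += 1
        elif g > 150 and r < 100:             # dominant green channel
            votes["green"] += 1
    return max(votes, key=votes.get)
```

Hand-tuned rules like these break down under varying illumination, which is precisely why the paper replaces them with a learned segmentation network.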

Keywords: traffic light detection, image segmentation, machine learning, classification, convolutional neural networks

Procedia PDF Downloads 153
6884 Polishing Machine Based on High-Pressure Water Jet

Authors: Mohammad A. Khasawneh

Abstract:

This study reports the design and fabrication of polishing equipment based on a high-pressure water jet, together with some preliminary test results assessing its applicability for HMA surface polishing. It also provides preliminary findings concerning the test variables experimentally investigated: rotational speed, water jet pressure, the abrasive agent used, and the impact angle. The preliminary findings, based on four trial tests (two on large slab specimens and two on small gyratory-compacted specimens), indicate that both friction and texture values tend to increase with polishing duration for the two combinations of pressure and rotational speed of the rotary deck. It appears that the more polishing action the specimen is subjected to, the more aggregate edges are created, such that surface texture values increase with an accompanying increase in friction values. It may be of interest (though outside the scope of this study) to investigate whether a similar trend exists for HMA prepared with an aggregate source of sand and gravel.

Keywords: high-pressure, water jet, friction, texture, polishing, statistical analysis

Procedia PDF Downloads 475
6883 Effect of Acid-Basic Treatments of Lignocellulosic Material (Forest Waste Wild Carob) on Ethyl Violet Dye Adsorption

Authors: Abdallah Bouguettoucha, Derradji Chebli, Tariq Yahyaoui, Hichem Attout

Abstract:

The effect of acid-basic treatment of a lignocellulosic material (forest waste wild carob) on Ethyl Violet adsorption was investigated. It was found that surface chemistry plays an important role in Ethyl Violet (EV) adsorption. HCl treatment produces more active acidic surface groups, such as carboxylic and lactone groups, resulting in increased adsorption of the EV dye. The adsorption efficiency was higher for the HCl-treated lignocellulosic material than for the KOH-treated material: the maximum biosorption capacities at pH 6 were 170 and 130 mg/g, respectively. It was also found that equilibrium is reached in less than 25 min for both treated materials. The adsorption of the basic dye (ethyl violet, or Basic Violet 4) was carried out while varying process parameters such as initial concentration, pH, and temperature. The adsorption process is well described by a pseudo-second-order reaction model, showing that boundary layer resistance was not the rate-limiting step, as confirmed by intraparticle diffusion analysis, since the linear plot of Qt versus t^0.5 did not pass through the origin. In addition, the experimental data were accurately described by the Sips equation when compared with the Langmuir and Freundlich isotherms. The values of ΔG° and ΔH° confirmed that the adsorption of EV on the acid-basic treated forest waste wild carob was spontaneous and endothermic in nature. The positive values of ΔS° suggest an increase in randomness at the treated lignocellulosic material-solution interface during adsorption.
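The pseudo-second-order model mentioned above has the linearised form t/Qt = 1/(k2·Qe²) + t/Qe, so Qe and k2 fall out of a straight-line fit of t/Qt against t; the sketch below demonstrates this on synthetic data, not on the study's measurements.

```python
# Linearised pseudo-second-order kinetic fit: plotting t/Qt against t gives a
# line with slope 1/Qe and intercept 1/(k2*Qe^2). The (t, Qt) data used in the
# test are synthetic, generated from assumed parameters for illustration only.
def fit_pseudo_second_order(ts, qts):
    """Return (Qe, k2) from contact times ts and adsorbed amounts qts."""
    xs = list(ts)
    ys = [t / q for t, q in zip(ts, qts)]   # t/Qt values
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    qe = 1.0 / slope                        # slope = 1/Qe
    k2 = slope ** 2 / intercept             # intercept = 1/(k2*Qe^2) = slope^2/k2
    return qe, k2
```

Applied to equilibrium data such as the 170 mg/g capacity reported for the HCl-treated material, this fit yields the rate constant k2 alongside Qe.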

Keywords: adsorption, isotherm models, thermodynamic parameters, wild carob

Procedia PDF Downloads 263
6882 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test

Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman

Abstract:

At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can compare a number of the most commonly adopted probability distributions and parameter estimation methods relatively quickly using a Windows interface. The new version of FLIKE incorporates the multiple Grubbs and Beck test, which can identify multiple potentially influential low flows. This paper presents a case study of six catchments in eastern Australia which compares two outlier identification tests (the original Grubbs and Beck test and the multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson type 3 (LP3)) using FLIKE software. It has been found that the multiple Grubbs and Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than the LP3 distribution with the original Grubbs and Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that the GEV distribution (with L moments) and the LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most cases; however, a difference of up to 38% has been noted in the flood quantile for an annual exceedance probability (AEP) of 1 in 100 for one catchment. These findings need to be confirmed with a greater number of stations across other Australian states.
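For readers unfamiliar with the tests being compared, the original single-outlier Grubbs statistic G = max|xᵢ - x̄|/s can be sketched as follows; the multiple Grubbs-Beck variant used in FLIKE iterates a related statistic over the lowest observations, and declaring an outlier additionally requires a critical value from t-distribution tables, which this sketch omits.

```python
import statistics

# Single-outlier Grubbs statistic: the largest absolute deviation from the
# sample mean, in units of the sample standard deviation. A formal test then
# compares G against a critical value derived from the t-distribution (not
# included here); the multiple Grubbs-Beck test extends this idea to detect
# several potentially influential low flows at once.
def grubbs_statistic(data):
    mean = statistics.mean(data)
    s = statistics.stdev(data)               # sample standard deviation
    devs = [abs(x - mean) for x in data]
    g = max(devs) / s
    suspect = data[devs.index(max(devs))]    # observation most distant from the mean
    return g, suspect
```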

Keywords: floods, FLIKE, probability distributions, flood frequency, outlier

Procedia PDF Downloads 434
6881 Investigation of Chip Formation Characteristics during Surface Finishing of HDPE Samples

Authors: M. S. Kaiser, S. Reaz Ahmed

Abstract:

Chip formation characteristics are investigated during surface finishing of high-density polyethylene (HDPE) samples using a shaper machine. Both cutting speed and depth of cut are varied continually to enable observation under various machining conditions. The generated chips are analyzed in terms of shape, size, and deformation, and their physical appearance is observed using a digital camera and an optical microscope. The investigation shows that continuous chips are obtained under all cutting conditions. Cutting speed is observed to be more influential than depth of cut in causing dimensional changes of the chips. The chip curl radius is also found to increase gradually with cutting speed. The length of the continuous chips always remains smaller than the job length, and the discrepancy is more prominent at lower cutting speeds. Microstructures of the chips reveal that cracks form at higher cutting speeds and depths of cut, an effect that is less significant at low depths of cut.

Keywords: HDPE, surface-finishing, chip formation, deformation, roughness

Procedia PDF Downloads 136
6880 Fine Grained Action Recognition of Skateboarding Tricks

Authors: Frederik Calsius, Mirela Popa, Alexia Briassouli

Abstract:

In the field of machine learning, it is common practice to use benchmark datasets to prove the working of a method. The domain of action recognition in videos often uses datasets like Kinetics, Something-Something, UCF-101 and HMDB-51 to report results. Considering the properties of these datasets, there are no datasets that focus solely on very short clips (2 to 3 seconds) and on highly similar, fine-grained actions within one specific domain. This paper researches how current state-of-the-art action recognition methods perform on a dataset that consists of highly similar, fine-grained actions. To do so, a dataset of skateboarding tricks was created. The performed analysis highlights both benefits and limitations of state-of-the-art methods, while proposing future research directions in the activity recognition domain. The conducted research shows that the best results are obtained by fusing RGB data with OpenPose data for the Temporal Shift Module.
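Since the best results were obtained with the Temporal Shift Module, its core operation is worth sketching: a fraction of each frame's channels is shifted one time step forward and another fraction one step backward, letting a 2D network mix information across frames at no extra computational cost. The pure-Python version below is a toy illustration under assumed conventions (fold_div, zero padding); real TSM operates on 4D feature tensors inside the network.

```python
# Minimal sketch of the temporal shift at the heart of TSM. For a clip of T
# frames with C channels each, the first C//fold_div channels are pulled from
# the NEXT frame (shift backward in time), the next C//fold_div from the
# PREVIOUS frame (shift forward), and the rest stay put. Out-of-range frames
# are zero-padded. fold_div=4 is an illustrative choice, not the paper's.
def temporal_shift(clip, fold_div=4):
    """clip: list of T frames, each a list of C channel values."""
    t, c = len(clip), len(clip[0])
    fold = c // fold_div
    shifted = [[0.0] * c for _ in range(t)]
    for i in range(t):
        for j in range(c):
            if j < fold:            # take this channel from the next frame
                src = i + 1
            elif j < 2 * fold:      # take this channel from the previous frame
                src = i - 1
            else:                   # remaining channels are unchanged
                src = i
            if 0 <= src < t:
                shifted[i][j] = clip[src][j]
    return shifted
```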

Keywords: activity recognition, fused deep representations, fine-grained dataset, temporal modeling

Procedia PDF Downloads 213
6879 Infection Control Drill: To Assess the Readiness and Preparedness of Staffs in Managing Suspected Ebola Patients in Tan Tock Seng Hospital Emergency Department

Authors: Le Jiang, Chua Jinxing

Abstract:

Introduction: The recent outbreak of Ebola virus disease in West Africa has drawn global concern. With a high fatality rate and direct human-to-human transmission, it has spread between countries and caused great harm to affected patients and families. As the designated hospital for managing epidemic outbreaks in Singapore, Tan Tock Seng Hospital (TTSH) faces great challenges in preparing for and managing potential outbreaks of emerging infectious diseases such as Ebola virus disease. Aim: We conducted an infection control drill in the TTSH emergency department to assess the readiness of healthcare and allied health workers in managing suspected Ebola patients. The drill also helped review the current Ebola clinical protocol and work instructions to ensure smoother and safer practice in managing Ebola patients in the TTSH emergency department. Result: The general preparedness level of staff involved in managing Ebola virus disease in the TTSH emergency department was not adequate. Knowledge deficits regarding the Ebola personal protective equipment gowning and degowning process increase the risk of cross-contamination in patient care. Loopholes were also found in the current clinical protocol, such as unclear instructions and inaccurate information, which need to be revised to promote better staff performance in patient management. Logistical issues such as equipment dysfunction and inadequate supplies can lead to ineffective communication among teams and cause harm to patients in emergency situations. Conclusion: The infection control drill identified the need for well-structured, clear clinical protocols to promote participants' performance. In addition to quality protocols and guidelines, systematic training and annual refreshers for all emergency department staff are essential to prepare for an outbreak of Ebola virus disease. Collaboration and communication with allied health staff are also crucial for smooth delivery of patient care and for minimising the potential human suffering, property loss, or injuries caused by the disease. Therefore, more clinical drills involving collaboration among the various departments are recommended in the future to monitor and assess the readiness of the TTSH emergency department in managing Ebola virus disease.

Keywords: ebola, emergency department, infection control drill, Tan Tock Seng Hospital

Procedia PDF Downloads 104
6878 The Impact of Failure-tolerant Restaurant Culture on Curbing Employees’ Withdrawal Behavior: The Roles of Psychological Empowerment and Mindful Leadership

Authors: Omar Alsetoohy, Mohamed Ezzat, Mahmoud Abou Kamar

Abstract:

The success of a restaurant or hotel depends very much on the quality and quantity of its human resources. Thus, establishing a competitive edge through human assets requires careful attention to the practices that best leverage these assets. Hotel and restaurant employees usually perceive customer defection as an unfavorable or unpleasant occurrence associated with failure, whether in handling, communication, learning, or encouragement. Moreover, employees may fear blame from colleagues and managers, which prevents them from discussing these mistakes freely. Such behaviors, in turn, push employees to withdraw from the workplace. Leadership outcomes are well documented, but less is known about how and why these effects occur. Mindful leaders usually analyze the causes and underlying mechanisms of failures for work improvement. However, despite the extensive literature on leadership and employee behavior, to date no research has investigated the impact of a failure-tolerant restaurant culture on employees' withdrawal behaviors while considering the moderating roles of psychological empowerment and mindful leadership. Thus, this study investigates the impact of a failure-tolerant culture on employees' withdrawal behaviors in fast-food restaurants in Egypt, considering the moderating effects of employee empowerment and mindful leadership. The study may contribute to the existing literature by filling the gap between failure-tolerant cultures and employee withdrawal behaviors in the hospitality literature. It may also identify best practices for restaurant operators and managers to treat employees' failures as a tool for improving their performance.

Keywords: failure-tolerant culture, employees’ withdrawal behaviors, psychological empowerment, mindful leadership, restaurants

Procedia PDF Downloads 94
6877 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues

Authors: Barna Arnold Keserű

Abstract:

In the last few years, in what many scholars call the big data era, artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science. What was previously mere science fiction is now becoming reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The author's main research focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), to make recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g., autonomous vehicles, the replacement of human jobs in industry, and smart applications and machines, and it aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, and the document calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content worthy of copyright protection, but the question arises: who is the author, and who owns the copyright?
The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking that sells the AI as a product, and the user who gives the AI the inputs needed to create something new. Or is AI-generated content so far removed from human input that there is no human author, so that it belongs to the public domain? The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and to explore future possibilities for adapting these frameworks to socio-economic needs. Here, proper license agreements in the multi-level chain from the programmer to the end user become very important, because AI is intellectual property in itself that creates further intellectual property. This could collide with data-protection and property rules as well. The problems are similar in the field of liability: different existing forms of liability can be applied when AI or AI-led robotics cause damage, but it is uncertain whether the result complies with economic and developmental interests.

Keywords: artificial intelligence, intellectual property, liability, robotics

Procedia PDF Downloads 186
6876 2D and 3D Unsteady Simulation of the Heat Transfer in the Sample during Heat Treatment by Moving Heat Source

Authors: Zdeněk Veselý, Milan Honner, Jiří Mach

Abstract:

The aim of the performed work is to establish 2D and 3D models of the direct unsteady task of sample heat treatment by a moving heat source, employing a computer model based on the finite element method. The complex boundary condition on the heat-loaded sample surface is the essential feature of the task. The computer model describes heat treatment of the sample as the heat source moves over the sample surface. The 2D task of the sample cross section serves as the basic model, and possibilities for extending it from 2D to 3D are discussed. The effect of adding the third model dimension on the temperature distribution in the sample is shown, and the influence of various model parameters on the sample temperatures is compared. The influence of heat source motion on the depth of material heat treatment is shown for several velocities of the movement. The presented computer model is intended for use in laser treatment of machine parts.
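While the paper uses a finite element model, the essence of an unsteady heat-conduction simulation with a moving source can be sketched with a 1D explicit finite-difference analogue; all material and source parameters below are placeholders, not the study's laser-treatment data.

```python
import math

# 1D explicit finite-difference analogue of an unsteady heat-treatment model:
# a Gaussian heat source moving along a bar at constant velocity. The time
# step is chosen to satisfy the explicit stability limit dt <= dx^2/(2*alpha).
# All parameter values are illustrative placeholders.
def simulate(nx=101, length=0.1, alpha=1e-5, q0=1e9, sigma=0.005,
             velocity=0.02, t_end=2.0, rho_c=3.6e6):
    dx = length / (nx - 1)
    dt = 0.4 * dx * dx / alpha        # below the 0.5*dx^2/alpha stability limit
    temps = [20.0] * nx               # uniform initial temperature, deg C
    t = 0.0
    while t < t_end:
        src_pos = velocity * t        # current centre of the moving source
        new = temps[:]
        for i in range(1, nx - 1):    # ends held at the initial temperature
            x = i * dx
            q = q0 * math.exp(-((x - src_pos) ** 2) / (2 * sigma ** 2))
            new[i] = (temps[i]
                      + alpha * dt / dx**2 * (temps[i-1] - 2*temps[i] + temps[i+1])
                      + q * dt / rho_c)
        temps = new
        t += dt
    return temps
```

Running the sketch shows the temperature peak trailing the source position, the same qualitative behaviour the 2D/3D finite element model resolves in full.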

Keywords: computer simulation, unsteady model, heat treatment, complex boundary condition, moving heat source

Procedia PDF Downloads 376
6875 Rotor Side Speed Control Methods Using MATLAB/Simulink for Wound Rotor Induction Motor

Authors: Rajesh Kumar, Roopali Dogra, Puneet Aggarwal

Abstract:

With recent advancements in electric machines and drives, the wound rotor induction motor is extensively used. Its merit is that the speed/torque characteristics can be controlled by inserting external resistance. The wound rotor induction motor can be used in cases requiring (a) low inrush current, (b) high starting torque, (c) lower starting current, (d) loads with high inertia, and (e) gradual build-up of torque. Examples include conveyors, cranes, pumps, elevators, and compressors. This paper covers speed control of the wound rotor induction motor using MATLAB/Simulink for the rotor resistance and slip power recovery methods, and the characteristics of these speed control methods are analyzed.
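The physics behind the rotor resistance method can be sketched from the standard steady-state equivalent circuit: adding external resistance R_ext raises the slip at which maximum torque occurs, s_maxT = (R2 + R_ext)/sqrt(R1^2 + (X1 + X2)^2), without changing the peak torque itself, which is why it improves starting torque. All parameter values below are illustrative, not taken from the paper.

```python
import math

# Steady-state torque of an induction motor from the standard per-phase
# equivalent circuit, with an external rotor resistance r_ext referred to the
# stator. Parameter values (230 V phase voltage, 50 Hz 4-pole synchronous
# speed 157.08 rad/s, etc.) are illustrative assumptions only.
def torque(slip, v=230.0, r1=0.5, x1=1.0, r2=0.3, x2=1.0, r_ext=0.0,
           omega_s=157.08, phases=3):
    rr = (r2 + r_ext) / slip          # effective rotor resistance over slip
    return phases * v**2 * rr / (omega_s * ((r1 + rr)**2 + (x1 + x2)**2))

def slip_at_max_torque(r1=0.5, x1=1.0, x2=1.0, r2=0.3, r_ext=0.0):
    """Slip of peak torque: grows with added rotor resistance."""
    return (r2 + r_ext) / math.sqrt(r1**2 + (x1 + x2)**2)
```

Evaluating the curve with and without R_ext shows the peak torque value unchanged while both the slip of peak torque and the torque at standstill (slip = 1) increase, which is exactly the behaviour exploited for high-starting-torque loads such as cranes and conveyors.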

Keywords: MATLAB/Simulink, rotor resistance method, slip power recovery method, wound rotor induction motor

Procedia PDF Downloads 353