Search results for: precision molding
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1052


422 A Case Study of Ontology-Based Sentiment Analysis for Fan Pages

Authors: C. -L. Huang, J. -H. Ho

Abstract:

Social media has become increasingly important in our lives. Many enterprises promote their services and products to fans via social media. The positive or negative sentiment of feedback from fans is very important for enterprises seeking to improve their products, services, and promotion activities. The purpose of this paper is to understand the sentiment of fans' responses by analyzing the responses posted by fans on Facebook. The entity and aspect of each fan response were analyzed based on a predefined ontology. The ontology for cell phone sentiment analysis consists of the following top-level aspect categories: overall, shape, hardware, brand, price, and service. Each category consists of several sub-categories. All aspects in a fan response were found based on the ontology, and their corresponding sentiment terms were identified using a lexicon-based approach. The sentiment scores for the aspects of a fan response were obtained by aggregating the sentiment terms in the response. The frequency of 'like' was also weighted in the sentiment score calculation. Three famous cell phone fan pages on Facebook were selected as demonstration cases to evaluate the performance of the proposed methodology. Human judgment by several domain experts was also collected for performance comparison. The performance of the proposed approach was as good as that of human judgment in terms of precision, recall, and F1-measure.
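
The ontology lookup and lexicon-based scoring step can be illustrated with a minimal sketch; the ontology entries, sentiment lexicon, and 'like' weighting factor below are hypothetical placeholders, not the authors' actual resources.

```python
# Minimal sketch of ontology-guided, lexicon-based aspect sentiment scoring.
# The ontology, sentiment lexicon, and 'like' weighting are illustrative only.

ONTOLOGY = {                      # aspect category -> trigger terms
    "hardware": ["battery", "camera", "screen"],
    "price":    ["price", "cost", "expensive", "cheap"],
    "service":  ["support", "warranty", "repair"],
}
LEXICON = {"great": 1, "love": 1, "fast": 1,          # term -> polarity score
           "terrible": -1, "expensive": -1, "slow": -1}

def score_response(text: str, likes: int, like_weight: float = 0.1) -> dict:
    """Return a sentiment score for each aspect found in one fan response."""
    tokens = text.lower().split()
    scores = {}
    for aspect, triggers in ONTOLOGY.items():
        if any(t in tokens for t in triggers):
            base = sum(LEXICON.get(tok, 0) for tok in tokens)
            # weight the raw lexicon score by the number of 'likes' on the post
            scores[aspect] = base * (1 + like_weight * likes)
    return scores

print(score_response("love the camera and the great screen", likes=12))
```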

Keywords: opinion mining, ontology, sentiment analysis, text mining

Procedia PDF Downloads 218
421 Fourier Transform and Machine Learning Techniques for Fault Detection and Diagnosis of Induction Motors

Authors: Duc V. Nguyen

Abstract:

Induction motors are widely used in different industry areas and can experience various kinds of faults in stators and rotors. In general, fault detection and diagnosis techniques for induction motors can be supervised by measuring quantities such as noise, vibration, and temperature. The installation of mechanical sensors in order to assess the health condition of a machine is typically only done for expensive or load-critical machines, where the high cost of a continuous monitoring system can be justified. Nevertheless, induced current monitoring can be implemented inexpensively on machines of arbitrary size by using current transformers. In this regard, effective and low-cost fault detection techniques can be implemented, hence reducing the maintenance and downtime costs of motors. This work proposes a method for fault detection and diagnosis of induction motors which combines the classical fast Fourier transform with modern/advanced machine learning techniques. The proposed method is validated on real-world data and achieves a precision of 99.7% for fault detection and 100% for fault classification with minimal expert knowledge requirements. In addition, this approach allows users to optimize/balance risks and maintenance costs to achieve the highest benefit based on their requirements. These are the key requirements of a robust prognostics and health management system.
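
A rough sketch of the kind of pipeline described, combining FFT-based spectral features of the stator current with a standard classifier; the sampling rate, placeholder signals, and the choice of random forest are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 10_000  # sampling rate in Hz (assumed, for illustration only)

def spectral_features(current: np.ndarray, n_peaks: int = 10) -> np.ndarray:
    """Classical FFT step: magnitudes and frequencies of the strongest spectral lines."""
    spectrum = np.abs(np.fft.rfft(current * np.hanning(current.size)))
    spectrum /= spectrum.max()                         # normalise against load level
    idx = np.argsort(spectrum)[-n_peaks:]              # indices of the dominant lines
    freqs = idx * FS / current.size                    # corresponding frequencies in Hz
    return np.concatenate([spectrum[idx], freqs])

# X_raw: (n_samples, n_points) stator-current windows, y: fault labels (placeholders here)
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 2048))
y = rng.integers(0, 3, size=200)

X = np.array([spectral_features(sig) for sig in X_raw])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```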

Keywords: fault detection, FFT, induction motor, predictive maintenance

Procedia PDF Downloads 147
420 Trace Analysis of Genotoxic Impurity Pyridine in Sitagliptin Drug Material Using UHPLC-MS

Authors: Bashar Al-Sabti, Jehad Harbali

Abstract:

Background: Pyridine is a reactive base that might be used in preparing sitagliptin. The International Agency for Research on Cancer classifies pyridine in group 2B; this classification means that pyridine is possibly carcinogenic to humans. Therefore, pyridine should be monitored at the allowed limit in sitagliptin pharmaceutical ingredients. Objective: The aim of this study was to develop a novel ultra-high-performance liquid chromatography-mass spectrometry (UHPLC-MS) method to estimate the quantity of the pyridine impurity in sitagliptin pharmaceutical ingredients. Methods: The separation was performed on a C8 Shim-pack column (150 mm × 4.6 mm, 5 µm) in reversed-phase mode using a mobile phase of water-methanol-acetonitrile containing 4 mM ammonium acetate in gradient mode. Pyridine was detected by mass spectrometry using selected ion monitoring mode at m/z = 80. The flow rate of the method was 0.75 mL/min. Results: The method showed excellent sensitivity, with a quantitation limit of 1.5 ppm of pyridine relative to sitagliptin. The linearity of the method was excellent over the range of 1.5-22.5 ppm, with a correlation coefficient of 0.9996. Recovery values were between 93.59% and 103.55%. Conclusions: The results showed good linearity, precision, accuracy, sensitivity, selectivity, and robustness. The studied method was applied to test three batches of sitagliptin raw material. Highlights: This method is useful for monitoring pyridine in sitagliptin during its synthesis and for testing sitagliptin raw materials before using them in the production of pharmaceutical products.

Keywords: genotoxic impurity, pyridine, sitagliptin, UHPLC-MS

Procedia PDF Downloads 79
419 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework

Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim

Abstract:

Background modeling and subtraction in video analysis has been widely proven to be an effective method for moving object detection in many computer vision applications. Over the past years, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are two of the most frequently occurring issues in practical situations. This paper presents a new two-layer model based on the codebook algorithm combined with the local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is designed as a block-based codebook that combines the LBP histogram with the mean values of the RGB color channels. Because of the invariance of LBP features with respect to monotonic gray-scale changes, this layer can produce block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is then employed to refine the outputs of the first layer and to further eliminate false positives. As a result, the proposed approach greatly improves accuracy under dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate a very competitive performance compared to previous models.
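
The block-level LBP histogram feature used in the first layer can be sketched as follows; this is a generic 8-neighbour LBP computed with NumPy, not the authors' exact implementation.

```python
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour local binary pattern; invariant to monotonic gray-scale changes."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                                  # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neighbour >= c).astype(np.int32) << bit)
    return codes

def block_lbp_histogram(gray_block: np.ndarray) -> np.ndarray:
    """256-bin LBP histogram describing one image block, normalised to sum to 1."""
    hist, _ = np.histogram(lbp_image(gray_block), bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```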

Keywords: background subtraction, codebook model, local binary pattern, dynamic background, illumination change

Procedia PDF Downloads 199
418 Aflatoxins Characterization in Remedial Plant-Delphinium denudatum by High-Performance Liquid Chromatography–Tandem Mass Spectrometry

Authors: Nadeem A. Siddique, Mohd Mujeeb, Kahkashan

Abstract:

Introduction: The objective of the present work is to study the occurrence of the aflatoxins B1, B2, G1, and G2 in remedial plants, specifically in Delphinium denudatum. The aflatoxins were analysed by high-performance liquid chromatography-tandem quadrupole mass spectrometry with electrospray ionization (HPLC-MS/MS), and immunoaffinity column chromatography was used for extraction and purification of the aflatoxins. PDA medium was selected for the fungal count. Results: A good linear relationship was found for AFB1, AFB2, AFG1, and AFG2 at 1-10 ppb (r > 0.9995). The analyte recoveries at three different spiking levels were 88.7-109.1%, with low percent relative standard deviations (precision) in each case. The aflatoxins can be separated within 5 to 7 min using an Agilent XDB C18 column. We found that AFB1 and AFB2 were not present in D. denudatum. This was consistent with the exceptionally low number of fungal colonies observed after 6 h of incubation. Conclusion: The developed analytical method is straightforward, accurate, economical, and can be successfully used to determine the aflatoxins in remedial plants and consequently to control the quality of products. The presence of aflatoxin in the plant extracts was correlated with the low fungal load in the remedial plants examined.

Keywords: aflatoxins, delphinium denudatum, liquid chromatography, mass spectrometry

Procedia PDF Downloads 190
417 Equation for Predicting Inferior Vena Cava Diameter as a Potential Pointer for Heart Failure Diagnosis among Adult in Azare, Bauchi State, Nigeria

Authors: M. K. Yusuf, W. O. Hamman, U. E. Umana, S. B. Oladele

Abstract:

Background: Dilatation of the inferior vena cava (IVC) is used as an ultrasonic diagnostic feature in patients suspected of congestive heart failure (CHF). The IVC diameter has been reported to vary across body mass index (BMI) and body shape index (ABSI) categories. Knowledge of these variations is useful to imaging scientists for the precise diagnosis of CHF. Aim: The study aimed to establish an equation for predicting the ultrasonic mean diameter of the IVC across the various BMI/ABSI categories of inhabitants of Azare, Bauchi State, Nigeria. Methodology: Two hundred physically healthy adult subjects of both sexes were classified as underweight, normal weight, overweight, or obese using their BMIs, after selection using a structured questionnaire and informed consent for an abdominal ultrasound scan. The probe was placed on the midline of the body, halfway between the xiphoid process and the umbilicus, with the marker on the probe directed towards the patient's head to obtain a longitudinal view of the IVC. The maximum IVC diameter was measured from the subcostal view using the electronic caliper of the scan machine. The mean value of each group was obtained, and the results were analysed. Results: A novel equation, IVC diameter = 1.04 + 0.01 × BMI, was generated for determining the IVC diameter among the populace. Conclusion: An equation for predicting the IVC diameter from individual BMI values in apparently healthy subjects has been established.
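
As a minimal worked example of applying the reported equation (the output unit is assumed to be centimetres, since it is not restated in this abstract):

```python
def predicted_ivc_diameter(bmi: float) -> float:
    """Reported regression: IVC diameter = 1.04 + 0.01 * BMI (output unit assumed to be cm)."""
    return 1.04 + 0.01 * bmi

print(predicted_ivc_diameter(25.0))   # BMI of 25 -> predicted diameter of 1.29
```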

Keywords: equation, ultrasonic, IVC diameter, body adiposities

Procedia PDF Downloads 52
416 The Behaviour of Laterally Loaded Piles Installed in the Sand with Enlarged Bases

Authors: J. Omer, H. Haroglu

Abstract:

Base enlargement in piles was introduced to enhance pile resistance under downward loading, but the contribution of an enlarged base to the lateral load resistance of a pile has not been fully exploited or understood. This paper presents a laboratory investigation of the lateral capacity and deformation response of small-scale steel piles with enlarged bases installed in dry sand. Static loading tests were performed on 24 model piles having different base-to-shaft diameter ratios. The piles were installed in a box filled with dry sand, and lateral loads were applied to the pile tops using a pulley system. The test piles had shaft diameters of 20 mm, 16 mm, and 10 mm and base diameters of 900 mm, 700 mm, and 500 mm. As a control, a pile without base enlargement was tested to allow comparison with the enlarged-base piles. Incremental maintained loads were applied until pile failure was approached, while pile head deflections were recorded with high-precision dial gauges. The results showed that the lateral capacity increased with base diameter, albeit by different percentages depending on the shaft diameter and embedment length in the sand. There was always an increase in lateral capacity with increasing embedment length. It was also observed that a pile with an enlarged base deflected less at a given load than the control pile. The research therefore demonstrated the lateral capacity and stability benefits of enlarging a pile base.

Keywords: pile foundations, enlarged base, lateral loading

Procedia PDF Downloads 121
415 Detecting Earnings Management via Statistical and Neural Networks Techniques

Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie

Abstract:

Predicting earnings management is vital for capital market participants, financial analysts, and managers. The aim of this research is to answer the following question: is there a significant difference between the regression model and neural network models in predicting earnings management, and which one leads to a superior prediction? To approach this question, a Linear Regression (LR) model was compared with two neural networks, a Multi-Layer Perceptron (MLP) and a Generalized Regression Neural Network (GRNN). The population of this study includes 94 companies listed on the Tehran Stock Exchange (TSE) from 2003 to 2011. After the results of all models were acquired, ANOVA was applied to test the hypotheses. In general, the summary of statistical results showed that the precision of the GRNN did not differ significantly from that of the MLP. In addition, the mean square errors of the MLP and GRNN showed a significant difference from that of the multivariable LR model. These findings support the notion of nonlinear behavior in earnings management. Therefore, it is more appropriate for capital market participants to analyze earnings management using neural network techniques rather than linear regression models.

Keywords: earnings management, generalized linear regression, neural networks multi-layer perceptron, Tehran stock exchange

Procedia PDF Downloads 406
414 Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles

Authors: Zehai Yu, Hui Zhu, Linglong Lin, Huawei Liang, Biao Yu, Weixin Huang

Abstract:

With the development of intelligent vehicle systems, a high-precision road map is increasingly needed in many respects. Automatic lane line extraction and modeling are the most essential steps in generating a precise lane-level road map. In this paper, an automatic lane-level road map generation system is proposed. To extract the road markings on the ground, the multi-region Otsu thresholding method is applied, which selects the intensity threshold of the laser data that maximizes the variance between background and road markings. The extracted road marking points are then projected onto a raster image and clustered using a two-stage clustering algorithm. Lane lines are subsequently recognized from these clusters by the shape features of their minimum bounding rectangles. To ensure the storage efficiency of the map, the lane lines are approximated by cubic polynomial curves using a Bayesian estimation approach. The proposed lane-level road map generation system has been tested under urban and expressway conditions in Hefei, China. The experimental results on the datasets show that our method achieves excellent extraction and clustering performance, and the fitted lines reach a high positional accuracy with an error of less than 10 cm.
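
The Otsu step described above, choosing the intensity threshold that maximizes the between-class variance, can be sketched as follows; this is the standard single-region formulation, and the multi-region extension would simply apply it to each region of the point cloud separately.

```python
import numpy as np

def otsu_threshold(intensity: np.ndarray, bins: int = 256) -> float:
    """Return the intensity threshold maximizing the between-class variance
    (background vs. road markings), following the classic Otsu criterion."""
    hist, edges = np.histogram(intensity, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                                   # class probability: background
    w1 = 1.0 - w0                                       # class probability: markings
    mu0 = np.cumsum(p * centers) / np.where(w0 > 0, w0, 1)
    mu_total = (p * centers).sum()
    mu1 = (mu_total - np.cumsum(p * centers)) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2                # between-class variance
    return centers[np.argmax(between)]

# In the multi-region variant, the laser points are split into regions and
# otsu_threshold() is applied to the intensity values of each region separately.
```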

Keywords: curve fitting, lane-level road map, line recognition, multi-thresholding, two-stage clustering

Procedia PDF Downloads 116
413 The Relationship between General Self-Efficacy, Perfectionism and Trait Anxiety: A Study among Gifted Students

Authors: Marialena Kostouli, Georgia Tsoulfa

Abstract:

The aim of this study is to investigate the relationship between general self-efficacy, perfectionism, and gifted students' trait anxiety. One hundred fifty-three students, all of whom were selected for and enrolled in the Center for Talented Youth (CTY) - Greece summer program, participated in the study. The sample consisted of 78 males (51%) and 75 females (49%), with a mean age of 14.96 years (SD = 1.16 years). Three self-report questionnaires were used for the purposes of the current study: the Frost Multidimensional Perfectionism Scale, the State-Trait Anxiety Inventory, and the General Self-Efficacy Scale. The results revealed a significant correlation between trait anxiety, general self-efficacy, and the four sub-scales of perfectionism (concern over mistakes and doubts about actions; excessive concern with parents' expectations and evaluation; excessively high personal standards; and concern with precision, order, and organization). It was also found that the female CTY students experienced greater levels of trait anxiety than the male CTYers. Moreover, a multiple regression analysis was conducted in order to determine the possible predictors of gifted students' trait anxiety. The analysis showed that general self-efficacy and concern over mistakes and doubts about actions significantly predicted the trait anxiety of the gifted children examined. Avenues for further research and implications for the development of interventions to help gifted students promote their general self-efficacy, reduce their concern over their actions, and develop strategies to cope with their anxiety are discussed.

Keywords: general self-efficacy, gifted students, perfectionism, trait anxiety

Procedia PDF Downloads 330
412 Analyzing the Performance of Machine Learning Models to Predict Alzheimer's Disease and its Stages Addressing Missing Value Problem

Authors: Carlos Theran, Yohn Parra Bautista, Victor Adankai, Richard Alo, Jimwi Liu, Clement G. Yedjou

Abstract:

Alzheimer's disease (AD) is a neurodegenerative disorder primarily characterized by deteriorating cognitive functions, and it has gained considerable attention in the last decade. An estimated 24 million people worldwide suffered from this disease by 2011. In 2016, an estimated 40 million people were diagnosed with AD, and by 2050 the number affected is expected to reach 131 million. Therefore, detecting and confirming AD at its different stages is a priority for medical practice in order to provide adequate and accurate treatments. Recently, Machine Learning (ML) models have been used to study AD's stages in a multiclass setting while handling missing values, focusing on the delineation of Early Mild Cognitive Impairment (EMCI), Late Mild Cognitive Impairment (LMCI), and cognitively normal (CN) subjects. However, to the best of our knowledge, robust performance information for these models and an analysis of the missing data have not been presented in the literature. In this paper, we study the performance of five different machine learning models for multiclass prediction of AD's stages in terms of accuracy, precision, and F1-score. An analysis of three imputation methods for handling the missing value problem is also presented. A framework that integrates ML models for multiclass prediction of AD's stages is proposed, achieving an average accuracy of 84%.
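
A minimal sketch of the kind of comparison described, pairing imputation strategies with a classifier and scoring accuracy, precision, and F1; the specific imputers, the random forest classifier, and the synthetic data are illustrative assumptions, not the three methods and five models the authors evaluated.

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline

# Placeholder feature matrix with NaNs for missing values; y: stage labels (CN/EMCI/LMCI)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
X[rng.random(X.shape) < 0.1] = np.nan              # inject 10% missing values
y = rng.integers(0, 3, size=300)

imputers = {
    "mean": SimpleImputer(strategy="mean"),
    "median": SimpleImputer(strategy="median"),
    "knn": KNNImputer(n_neighbors=5),
}
for name, imp in imputers.items():
    pipe = make_pipeline(imp, RandomForestClassifier(n_estimators=100, random_state=0))
    scores = cross_validate(pipe, X, y, cv=5,
                            scoring=["accuracy", "precision_macro", "f1_macro"])
    print(name, {k: v.mean().round(3) for k, v in scores.items() if k.startswith("test")})
```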

Keywords: alzheimer's disease, missing value, machine learning, performance evaluation

Procedia PDF Downloads 217
411 Random Forest Classification for Population Segmentation

Authors: Regina Chua

Abstract:

To reduce the costs of re-fielding a large survey, a Random Forest classifier was applied to measure the accuracy of classifying individuals into their assigned segments with the fewest possible questions. Given a long survey, one needed to determine the most predictive ten or fewer questions that would accurately assign new individuals to custom segments. Furthermore, the solution needed to be quick in its classification and usable in non-Python environments. In this paper, a supervised Random Forest classifier was modeled on a dataset with 7,000 individuals, 60 questions, and 254 features. The Random Forest consists of a collection of individual decision trees whose combined vote yields a predicted segment with robust precision and recall scores compared to a single tree. A random 70-30 stratified split was used for training the algorithm, and accuracy trade-offs at different depths were identified for each segment. Ultimately, the Random Forest classifier performed at 87% accuracy at a depth of 10 with 20 instead of 254 features and 10 instead of 60 questions. With acceptable accuracy in prioritizing feature selection, new tools were developed for non-Python environments: a worksheet with a formulaic version of the algorithm and an embedded function to predict the segment of an individual in real time. Random Forest was determined to be an optimal classification model by virtue of its feature selection, performance, processing speed, and flexible application in other environments.
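
A minimal sketch of the approach described, training a random forest and keeping only the most important features; the synthetic matrix below stands in for the survey data, while the 70-30 stratified split, depth of 10, and the 254-to-20 feature reduction mirror the figures reported above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder stand-in for the survey matrix (7,000 respondents x 254 features).
rng = np.random.default_rng(1)
X = rng.normal(size=(7000, 254))
y = rng.integers(0, 5, size=7000)                       # assigned segments

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

full = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=1).fit(X_tr, y_tr)

# Keep only the 20 most important features, mirroring the 254 -> 20 reduction.
top20 = np.argsort(full.feature_importances_)[-20:]
small = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=1)
small.fit(X_tr[:, top20], y_tr)
print("accuracy with 20 features:", small.score(X_te[:, top20], y_te))
```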

Keywords: machine learning, supervised learning, data science, random forest, classification, prediction, predictive modeling

Procedia PDF Downloads 79
410 Subpixel Corner Detection for Monocular Camera Linear Model Research

Authors: Guorong Sui, Xingwei Jia, Fei Tong, Xiumin Gao

Abstract:

Camera calibration is a fundamental issue in high-precision non-contact measurement, and it is necessary to analyze and study the reliability and application range of the linear model that is often used in camera calibration. According to the imaging features of monocular cameras, a camera model based on image pixel coordinates and three-dimensional space coordinates is built. Using our own customized template, the image pixel coordinates are obtained by the subpixel corner detection method. Without considering the aberration of the optical system, the feature extraction and linearity analysis of the line segments in the template are performed. Moreover, the experiment is repeated 11 times while varying the measuring distance. Finally, the linearity of the camera is obtained by fitting the 11 groups of data. The camera model measurement results show that the relative error does not exceed 1% and that the repeated measurement error is no more than 0.1 mm in magnitude. Meanwhile, it is found that the model exhibits some measurement differences across regions and object distances. The experimental results show that this linear model is simple and practical and has good linearity within a certain object distance. These results provide a powerful basis for the establishment of the linear camera model, and this work will have potential value for practical engineering measurement.
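
The subpixel corner step can be sketched with OpenCV's standard corner refinement routine; the template image file, window sizes, termination criteria, and the assumption that the first corners lie on one template line are illustrative, not the authors' custom template or code.

```python
import cv2
import numpy as np

# Calibration template image (hypothetical file name, assumed grayscale).
gray = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Coarse corner candidates, then iterative refinement to subpixel precision.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=100, qualityLevel=0.01, minDistance=10)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 0.001)
refined = cv2.cornerSubPix(gray, np.float32(corners), winSize=(5, 5),
                           zeroZone=(-1, -1), criteria=criteria)

# Linearity analysis sketch: corners assumed to lie on one template line segment
# should fit a straight line in pixel coordinates with a small residual.
line_pts = refined[:10, 0, :]                      # assumption: first ten corners are collinear
slope, intercept = np.polyfit(line_pts[:, 0], line_pts[:, 1], 1)
residual = np.abs(line_pts[:, 1] - (slope * line_pts[:, 0] + intercept)).max()
print("max deviation from fitted line (px):", residual)
```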

Keywords: camera linear model, geometric imaging relationship, image pixel coordinates, three dimensional space coordinates, sub-pixel corner detection

Procedia PDF Downloads 266
409 Thermoplastic-Intensive Battery Trays for Optimum Electric Vehicle Battery Pack Performance

Authors: Dinesh Munjurulimana, Anil Tiwari, Tingwen Li, Carlos Pereira, Sreekanth Pannala, John Waters

Abstract:

With the rapid transition to electric vehicles (EVs) across the globe, car manufacturers need integrated and lightweight solutions for the battery packs of these vehicles. An integral part of a battery pack is the battery tray, which constitutes a significant portion of the pack's overall weight. Depending on the functional requirements, cost targets, and packaging space available, a range of materials, from metals and composites to plastics, is often used to develop these battery trays. This paper considers the design and development of integrated thermoplastic-intensive battery trays, using the available packaging space from a representative EV battery pack. Multiple concepts are presented as proposed alternatives that integrate several connected systems, such as the cooling plates and underbody impact protection parts, of a multi-piece incumbent battery pack. The resulting digital prototype was evaluated for several mechanical performance measures, such as mechanical shock, drop, crush resistance, modal analysis, and torsional stiffness. The performance of this alternative design is then compared with the incumbent solution. In addition, insights are gleaned into how these novel approaches can be optimized to meet or exceed the performance of incumbent designs. The preliminary manufacturing feasibility of the optimal solution using injection molding and other manufacturing methods commonly used for thermoplastics is briefly explained. Numerical and analytical evaluations are then performed to show a representative Pareto front of cost versus production part volume. The proposed solution is observed to offer weight savings of up to 40% at the component level and the elimination of up to two systems in the battery pack of a typical battery EV, while offering the potential to meet the required performance measures highlighted above. These conceptual solutions are also observed to potentially offer secondary benefits, such as improved thermal and electrical isolation, and to achieve complex geometrical features, thus demonstrating the ability to use the complete packaging space available in the vehicle platform considered. The detailed study presented in this paper serves as a valuable reference for researchers across the globe working on the development of EV battery packs, especially those interested in the potential of employing alternative solutions as part of a mixed-material system to capture untapped opportunities to optimize performance and meet critical application requirements.

Keywords: thermoplastics, lightweighting, part integration, electric vehicle battery packs

Procedia PDF Downloads 193
408 Development of the Analysis and Pretreatment of Brown HT in Foods

Authors: Hee-Jae Suh, Mi-Na Hong, Min-Ji Kim, Yeon-Seong Jeong, Ok-Hwan Lee, Jae-Wook Shin, Hyang-Sook Chun, Chan Lee

Abstract:

Brown HT is a bis-azo dye that is permitted in the EU as a food colorant. So far, many studies have focused on HPLC analysis with diode array detection (DAD) for this food colorant, using different columns and mobile phases. Even though these methods make it possible to detect Brown HT, low recovery, reproducibility, and linearity are still the major limitations for application in foods. The purpose of this study was to compare various methods for the analysis of Brown HT and to develop an improved analytical method including pretreatment. Among the tested analysis methods, the best resolution of Brown HT was observed with the following eluent: solvent A of the mobile phase was 0.575 g NH4H2PO4 and 0.7 g Na2HPO4 in 500 mL of water mixed with 500 mL of methanol, adjusted to pH 6.9 with phosphoric acid, and solvent B was methanol. The major peak for Brown HT appeared at the end of the separation, 13.4 min after injection. This method exhibited relatively high recovery and reproducibility compared with other methods. The LOD (0.284 ppm), LOQ (0.861 ppm), resolution (6.143), and selectivity (1.3) of this method were better than those of the ammonium acetate solution method, which was the most frequently used. Precision and accuracy were verified through inter-day and intra-day tests. Various sample pretreatment methods were developed for different foods, and a relatively high recovery of over 80% was observed in all cases. This method exhibited high resolution and reproducibility for Brown HT compared with other previously reported official methods from the FSA and EU regulations.

Keywords: analytic method, Brown HT, food colorants, pretreatment method

Procedia PDF Downloads 462
407 Using Deep Learning Real-Time Object Detection Convolution Neural Networks for Fast Fruit Recognition in the Tree

Authors: K. Bresilla, L. Manfrini, B. Morandi, A. Boini, G. Perulli, L. C. Grappadelli

Abstract:

Image and video processing for fruit in the tree using hard-coded feature extraction algorithms has shown high accuracy in recent years. While accurate, these approaches are computationally intensive even with high-end hardware and too slow for real-time systems. This paper details the use of deep convolutional neural networks (CNNs), specifically the YOLO (You Only Look Once) algorithm with 24+2 convolution layers. Using deep-learning techniques eliminated the need to hard-code specific features for particular fruit shapes, colors, and/or other attributes. This CNN was trained on more than 5000 images of apple and pear fruits on a 960-core GPU (Graphical Processing Unit). The testing set showed an accuracy of 90%. The trained model was then transferred to an embedded device (Raspberry Pi gen. 3) with a camera for greater portability. Based on the correlation between the number of visible or detected fruits in one frame and the real number of fruits on one tree, a model was created to accommodate this error rate. The processing and detection speed of the whole platform was higher than 40 frames per second. This speed is fast enough for any grasping/harvesting robotic arm or other real-time applications.

Keywords: artificial intelligence, computer vision, deep learning, fruit recognition, harvesting robot, precision agriculture

Procedia PDF Downloads 397
406 Hard Disk Failure Predictions in Supercomputing System Based on CNN-LSTM and Oversampling Technique

Authors: Yingkun Huang, Li Guo, Zekang Lan, Kai Tian

Abstract:

Hard disk drive (HDD) failures in an exascale supercomputing system may lead to service interruption, invalidate previous calculations, and cause permanent data loss. Therefore, initiating corrective actions before hard drive failures materialize is critical to the continued operation of jobs. In this paper, a highly accurate analysis model based on CNN-LSTM and an oversampling technique is proposed, which can correctly predict the necessity of a disk replacement even ten days in advance. In general, learning-based methods perform poorly on training datasets with long-tail distributions, and fault prediction is a classic example because of the scarcity of failure data. To overcome this problem, a new oversampling technique was employed to augment the data, and then an improved CNN-LSTM with a shortcut was built to learn more effective features. The shortcut transmits the results of an earlier CNN layer and, after weighted fusion with the output of the next layer, is used as the input to the LSTM model. Finally, a detailed empirical comparison of six prediction methods on a public dataset is presented and discussed for evaluation. The experiments indicate that the proposed method predicts disk failure with 0.91 precision, 0.91 recall, 0.91 F-measure, and 0.90 MCC for a 10-day prediction horizon. Thus, the proposed algorithm is an efficient algorithm for predicting HDD failure in supercomputing.
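
A minimal sketch of a CNN-LSTM with a shortcut of the kind described, where the output of an earlier convolution is fused with the output of the next layer before feeding the LSTM; the layer sizes, the 10-day SMART window, and the fixed fusion weights are illustrative assumptions, not the authors' tuned architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW, N_FEATS = 10, 12            # days of SMART attributes per sample (assumed sizes)

inputs = layers.Input(shape=(WINDOW, N_FEATS))
c1 = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inputs)
c2 = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(c1)

# Shortcut: weighted fusion of the earlier conv output with the next layer's output,
# then used as the LSTM input (weights fixed at 0.5 here for illustration).
fused = layers.Add()([layers.Lambda(lambda t: 0.5 * t)(c1),
                      layers.Lambda(lambda t: 0.5 * t)(c2)])
h = layers.LSTM(64)(fused)
outputs = layers.Dense(1, activation="sigmoid")(h)   # probability of failure within horizon

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```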

Keywords: HDD replacement, failure, CNN-LSTM, oversampling, prediction

Procedia PDF Downloads 64
405 Development of an Advanced Power Ultrasonic-Assisted Drilling System

Authors: M. A. Moghaddas, M. Short, N. Wiley, A. Y. Yi, K. F. Graff

Abstract:

The application of ultrasonic vibrations to machining processes has a long history, ranging from slurry-based systems able to drill brittle materials to more recent developments involving low-power ultrasonics for high-precision machining, with many of these at the research and laboratory stages. The focus of this development is the application of high levels of ultrasonic power (thousands of watts) to standard, heavy-duty machine tools. Drilling is the immediate focus, with developments in milling in progress, and the objective is to dramatically increase system productivity through faster feed rates, a benefit arising from the thrust force reductions obtained by power ultrasonic vibrations. The presentation will describe the development of an advanced drilling system based on a special, acoustically designed, rugged drill module capable of functioning under heavy-duty production conditions, making use of standard tool holders, and able to obtain thrust force reductions while maintaining or improving surface finish and drilling accuracy. The characterization of the system performance will be described, and results obtained in drilling several materials (aluminum, stainless steel, titanium) will be presented.

Keywords: dimensional accuracy, machine tool, productivity, surface roughness, thrust force, ultrasonic vibrations, ultrasonic-assisted drilling

Procedia PDF Downloads 263
404 ParkedGuard: An Efficient and Accurate Parked Domain Detection System Using Graphical Locality Analysis and Coarse-To-Fine Strategy

Authors: Chia-Min Lai, Wan-Ching Lin, Hahn-Ming Lee, Ching-Hao Mao

Abstract:

As the worldwide internet develops non-stop, making a profit by lending registered domain names has emerged as a new business in recent years. Unfortunately, the larger the market for domain lending services becomes, the greater the risk that malicious behaviors or malware hide behind parked domains. Also, previous work on differentiating parked domains suffers from two main defects: 1) too much data-collection effort and CPU latency is needed for feature engineering, and 2) it is ineffective when detecting parked domains containing external links that are usually abused by hackers, e.g., in drive-by download attacks. Aiming to alleviate the above defects without sacrificing practical usability, this paper proposes ParkedGuard, an efficient and accurate parked domain detector. Several scripting behavioral features were analyzed, and those with special statistical significance are adopted in ParkedGuard to make feature engineering much more cost-efficient. On the other hand, finding memberships between external links and parked domains was modeled as a graph mining problem, and a coarse-to-fine strategy was elaborately designed by leveraging graphical locality such that ParkedGuard outperforms the state of the art in terms of both recall and precision rates.

Keywords: coarse-to-fine strategy, domain parking service, graphical locality analysis, parked domain

Procedia PDF Downloads 392
403 From Wave-Powered Propulsion to Flight with Membrane Wings: Insights Powered by High-Fidelity Immersed Boundary Methods based FSI Simulations

Authors: Rajat Mittal, Jung Hee Seo, Jacob Turner, Harshal Raut

Abstract:

The perpetual advancement in computational capabilities, coupled with the continuous evolution of software tools and numerical algorithms, is creating novel avenues for research, exploration, and application at the nexus of computational fluid and structural mechanics. Fish leverage their remarkably flexible bodies and fins to harness energy from vortices, propelling themselves with an elegance and efficiency that captivates engineers. Bats fly with unparalleled agility and speed by using their flexible membrane wings. Wave-assisted propulsion (WAP) systems, utilizing elastically mounted hydrofoils, convert wave energy into thrust. Each of these problems involves a complex and elegant interplay between fluid dynamics and structural mechanics. Historically, investigations into such phenomena were constrained by available tools, but modern computational advancements now facilitate exploration of these multi-physics challenges with an unprecedented level of fidelity, precision, and realism. In this work, the author will discuss projects that harness the capabilities of high-fidelity sharp-interface immersed boundary methods to address a spectrum of engineering and biological challenges involving fluid-structure interaction.

Keywords: immersed boundary methods, CFD, bioflight, fluid structure interaction

Procedia PDF Downloads 47
402 Parameter Estimation for the Mixture of Generalized Gamma Model

Authors: Wikanda Phaphan

Abstract:

The mixture generalized gamma distribution is a combination of two distributions: the generalized gamma distribution and the length-biased generalized gamma distribution. These two distributions were presented by Suksaengrakcharoen and Bodhisuwan in 2014. Their findings showed that the probability density function (pdf) is fairly complex, which creates problems in estimating the parameters. The problem in parameter estimation is that the estimators cannot be calculated in closed form, so numerical estimation must be used to find them. In this study, we present a new method of parameter estimation using the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. The data were generated by the acceptance-rejection method and used for estimating α, β, λ, and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. The Monte Carlo technique was used to assess the estimators' performance. Sample sizes of 10, 30, and 100 were considered, and the simulations were repeated 20 times in each case. We evaluated the effectiveness of the estimators by considering the mean squared error and the bias. The findings revealed that the EM algorithm produced estimates close to the actual values. Also, the maximum likelihood estimators obtained via the conjugate gradient and quasi-Newton methods are less precise than the maximum likelihood estimators obtained via the EM algorithm.

Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method

Procedia PDF Downloads 206
401 A Leaf-Patchable Reflectance Meter for in situ Continuous Monitoring of Chlorophyll Content

Authors: Kaiyi Zhang, Wenlong Li, Haicheng Li, Yifei Luo, Zheng Li, Xiaoshi Wang, Xiaodong Chen

Abstract:

Plant wearable sensors facilitate the real-time monitoring of plant physiological status. In situ monitoring of plant chlorophyll content over days could provide valuable information on photosynthetic capacity, nitrogen content, and general plant health; however, it cannot be achieved by current chlorophyll measuring methods. Here, a miniaturized and plant-wearable chlorophyll meter was developed for rapid, non-destructive, in situ, and long-term chlorophyll monitoring. This reflectance-based chlorophyll sensor, with 1.5 mm thickness and 0.2 g weight (1000 times lighter than a commercial chlorophyll meter), includes a light-emitting diode (LED) and two symmetric photodetectors (PDs) on a flexible substrate and is patched onto the leaf upper epidermis with a conformal light-guiding layer. A chlorophyll content index (CCI) calculated from this sensor shows a better linear relationship with the leaf chlorophyll content (r² > 0.9) than the traditional chlorophyll meter. The meter can communicate wirelessly with a smartphone for long-term monitoring of leaf chlorophyll changes under various stresses, and it indicates unhealthy plant status earlier than a conventional chlorophyll meter or naked-eye observation. This wearable chlorophyll sensing patch is promising for smart and precision agriculture.

Keywords: plant wearable sensors, reflectance-based measurements, chlorophyll content monitoring, smart agriculture

Procedia PDF Downloads 90
400 Additive Manufacturing with Ceramic Filler

Authors: Irsa Wolfram, Boruch Lorenz

Abstract:

Innovative solutions in additive manufacturing applying material extrusion for functional parts necessitate innovative filaments with consistent quality. Uniform homogeneity and a consistent dispersion of particles embedded in filaments generally require multiple cycles of extrusion or well-prepared primal matter from injection molding, kneader machines, or mixing equipment. These technologies require dedicated equipment that is rarely at the disposal of production laboratories unfamiliar with research in polymer materials. This stands in contrast to laboratories that investigate complex material topics and technology science to leverage the potential of 3-D printing. Consequently, scientific studies in labs are often constrained to the compositions and concentrations of fillers offered on the market. Therefore, we introduce a prototypal laboratory methodology scalable to tailored primal matter for extruding ceramic composite filaments with fused filament fabrication (FFF) technology. A desktop single-screw extruder serves as the core device for the experiments. Custom-made filaments encapsulate the ceramic fillers and use polylactide (PLA), a thermoplastic polyester, as primal matter, which is processed in the melting area of the extruder while preserving the defined concentration of the fillers. Validated results demonstrate that this approach enables continuously produced and uniform composite filaments with consistent homogeneity. It is 3-D printable with controllable dimensions, which is a prerequisite for any scalable application. Additionally, digital microscopy confirms the steady dispersion of the ceramic particles in the composite filament. This permits a 2D reconstruction of the planar distribution of the embedded ceramic particles in the PLA matrices. The innovation of the introduced method lies in the smart simplicity of preparing the composite primal matter. It circumvents the inconvenience of numerous extrusion operations and expensive laboratory equipment. Nevertheless, it delivers consistent filaments of controlled, predictable, and reproducible filler concentration, which is the prerequisite for any industrial application. The introduced prototypal laboratory methodology seems capable of handling other polymer matrices and suitable for further particle types beyond ceramic fillers. This opens a roadmap for supplementary laboratory development of specialized composite filaments, providing value for industries and societies. This low-threshold entry into the sophisticated preparation of composite filaments, enabling businesses to create their own dedicated filaments, will support the mutual efforts to establish 3D printing for new functional devices.

Keywords: additive manufacturing, ceramic composites, complex filament, industrial application

Procedia PDF Downloads 91
399 Modeling the Time Dependent Biodistribution of a 177Lu Labeled Somatostatin Analogues for Targeted Radiotherapy of Neuroendocrine Tumors Using Compartmental Analysis

Authors: Mahdieh Jajroudi

Abstract:

The main purpose of this study was to develop a pharmacokinetic model for the neuroendocrine tumor therapy agent 177Lu-DOTATATE in nude mice bearing AR42J rat pancreatic tumors, in order to investigate and evaluate the behavior of the complex. Compartmental analysis permits the mathematical separation of tissues and organs so that the activity concentration in each fraction of interest can be determined. Biodistribution studies are onerous and difficult to perform in humans, but such data can be obtained readily in rodents. A physiologically based pharmacokinetic model for scaling up the activity concentration in particular organs versus time was developed. The mathematical model uses physiological parameters including organ volumes, blood flow rates, and vascular permeabilities, and the compartments (organs) are connected anatomically. This allows the use of scale-up techniques to forecast the distribution of the new complex in each human organ. The concentration of the radiopharmaceutical in various organs was measured at different times. The temporal behavior of the biodistribution of the 177Lu-labeled somatostatin analogues was modeled and plotted as a function of time. Conclusion: The variation of the pharmaceutical concentration in all organs is characterized by a sum of six to nine exponential terms, which approximates our experimental data with a precision better than 1%.
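
The final characterization, each organ's concentration curve expressed as a sum of exponential terms, can be illustrated with a minimal fitting sketch; the three-term model, sampling times, and synthetic data below are placeholders, not the authors' six-to-nine-term organ fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exponential(t, a1, k1, a2, k2, a3, k3):
    """Sum of three exponential terms; the study reports six to nine terms per organ."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + a3 * np.exp(-k3 * t)

# Synthetic stand-in for measured organ activity concentration at sampling times (hours).
t = np.array([1, 4, 8, 24, 48, 72, 120], dtype=float)
c = tri_exponential(t, 5.0, 0.9, 2.0, 0.12, 0.8, 0.01)

popt, _ = curve_fit(tri_exponential, t, c, p0=[4, 1, 1, 0.1, 1, 0.01], maxfev=10000)
fit_error = np.abs(tri_exponential(t, *popt) - c).max() / c.max()
print("relative fit error:", fit_error)          # should be well below 1% on this toy data
```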

Keywords: biodistribution modeling, compartmental analysis, 177Lu labeled somatostatin analogues, neuroendocrine tumors

Procedia PDF Downloads 346
398 D3Advert: Data-Driven Decision Making for Ad Personalization through Personality Analysis Using BiLSTM Network

Authors: Sandesh Achar

Abstract:

Personalized advertising holds greater potential for higher conversion rates compared to generic advertisements. However, its widespread application in the retail industry faces challenges due to complex implementation processes. These complexities impede the swift adoption of personalized advertisement on a large scale. Personalized advertisement, being a data-driven approach, necessitates consumer-related data, adding to its complexity. This paper introduces an innovative data-driven decision-making framework, D3Advert, which personalizes advertisements by analyzing personalities using a BiLSTM network. The framework utilizes the Myers–Briggs Type Indicator (MBTI) dataset for development. The employed BiLSTM network, specifically designed and optimized for D3Advert, classifies user personalities into one of the sixteen MBTI categories based on their social media posts. The classification accuracy is 86.42%, with precision, recall, and F1-Score values of 85.11%, 84.14%, and 83.89%, respectively. The D3Advert framework personalizes advertisements based on these personality classifications. Experimental implementation and performance analysis of D3Advert demonstrate a 40% improvement in impressions. D3Advert’s innovative and straightforward approach has the potential to transform personalized advertising and foster widespread personalized advertisement adoption in marketing.
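
A minimal sketch of a BiLSTM personality classifier of the kind described; the vocabulary size, sequence length, and embedding dimension are assumptions for illustration, not the tuned configuration that produced the results reported above.

```python
from tensorflow.keras import layers, Model

VOCAB, MAXLEN, EMB = 20000, 300, 128       # assumed sizes for tokenised social media posts

inputs = layers.Input(shape=(MAXLEN,), dtype="int32")
x = layers.Embedding(VOCAB, EMB, mask_zero=True)(inputs)
x = layers.Bidirectional(layers.LSTM(64))(x)          # BiLSTM over the post tokens
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(16, activation="softmax")(x)   # 16 MBTI categories

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```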

Keywords: personalized advertisement, deep learning, MBTI dataset, BiLSTM network, NLP

Procedia PDF Downloads 24
397 Comparative Analysis of Classification Methods in Determining Non-Active Student Characteristics in Indonesia Open University

Authors: Dewi Juliah Ratnaningsih, Imas Sukaesih Sitanggang

Abstract:

Classification is a data mining technique that aims to discover from training data a model that assigns records to the appropriate category or class. Data mining classification methods can be applied in education, for example, to determine the classification of non-active students at Indonesia Open University. This paper presents a comparison of three classification methods: Naïve Bayes, Bagging, and C4.5. The criteria used to evaluate the performance of the three classification methods are stratified cross-validation, the confusion matrix, the value of the area under the ROC curve (AUC), recall, precision, and F-measure. The data used for this paper are from non-active Indonesia Open University students in the registration periods 2004.1 to 2012.2. The target analysis requires that non-active students be divided into three groups: C1, C2, and C3. The data analyzed cover 4,173 students. The results of the study show that: (1) the Bagging method gave a higher degree of classification accuracy than Naïve Bayes and C4.5; (2) the Bagging classification accuracy rate is 82.99%, while those of Naïve Bayes and C4.5 are 80.04% and 82.74%, respectively; (3) the Bagging classification tree has a large number of nodes, so it is quite difficult to use for decision making; (4) the classification of non-active Indonesia Open University student characteristics therefore uses the C4.5 algorithm; and (5) based on the C4.5 algorithm, there are 5 interesting rules which describe the characteristics of non-active Indonesia Open University students.
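
A minimal sketch of the comparison pipeline described, scoring Naïve Bayes, Bagging, and a C4.5-style decision tree with stratified cross-validation; scikit-learn's entropy-based tree stands in for C4.5, and the synthetic data below are placeholders for the student records.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder for the 4,173 non-active student records with three target groups.
rng = np.random.default_rng(0)
X = rng.normal(size=(4173, 12))
y = rng.integers(0, 3, size=4173)          # groups C1, C2, C3

models = {
    "Naive Bayes": GaussianNB(),
    "Bagging": BaggingClassifier(n_estimators=50, random_state=0),
    "C4.5-style tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {acc.mean():.4f} +/- {acc.std():.4f}")
```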

Keywords: comparative analysis, data mining, classification, Bagging, Naïve Bayes, C4.5, non-active students, Indonesia Open University

Procedia PDF Downloads 302
396 Design of a Cooperative Neural Network, Particle Swarm Optimization (PSO) and Fuzzy Based Tracking Control for a Tilt Rotor Unmanned Aerial Vehicle

Authors: Mostafa Mjahed

Abstract:

Tilt rotor UAVs (Unmanned Aerial Vehicles) are naturally unstable and difficult to maneuver. The purpose of this paper is to design controllers for the stabilization and trajectory tracking of this type of UAV. To this end, artificial intelligence methods have been exploited. First, the dynamics of this UAV were modeled using the Lagrange-Euler method. The conventional method based on Proportional, Integral, and Derivative (PID) control was applied by decoupling the different flight modes. To improve the stability and trajectory tracking of the tilt rotor, the fuzzy approach and the technique of multilayer neural networks (NN) were used. Thus, Fuzzy Proportional Integral Derivative (FPID) and Neural Network-based Proportional Integral Derivative (NNPID) controllers were developed. The meta-heuristic approach based on the Particle Swarm Optimization (PSO) method was used to adjust the tuning parameters of the NNPID controller, giving an improved NNPID-PSO controller. Simulation results under the Matlab environment show the efficiency of the adopted approaches. The tilt rotor UAV becomes stable and follows different types of trajectories with acceptable precision. The fuzzy, NN, and NN-PSO-based approaches demonstrated their robustness, because the presence of disturbances did not alter the stability or the trajectory tracking of the tilt rotor UAV.
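
The PSO tuning step can be sketched in a generic form; the cost function below is a placeholder (tracking error of a simple PID loop on a toy first-order plant), not the Tilt Rotor dynamics or the NNPID controller structure, and the swarm parameters are illustrative.

```python
import numpy as np

def tracking_cost(gains: np.ndarray) -> float:
    """Placeholder cost: integrated absolute error of a PID loop on a toy plant dy/dt = -y + u."""
    kp, ki, kd = gains
    dt, y, integ, prev_err, cost = 0.01, 0.0, 0.0, 0.0, 0.0
    for _ in range(500):                       # 5 s step-response simulation
        err = 1.0 - y
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        y += dt * (-y + u)
        if abs(y) > 1e6:                       # penalise unstable gain sets
            return 1e6
        prev_err = err
        cost += abs(err) * dt
    return cost

rng = np.random.default_rng(0)
n_particles, n_iter = 20, 50
pos = rng.uniform(0.0, 5.0, size=(n_particles, 3))            # particles = (kp, ki, kd)
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([tracking_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    cost = np.array([tracking_cost(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("best (kp, ki, kd):", gbest.round(3), "cost:", pbest_cost.min().round(4))
```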

Keywords: neural network, fuzzy logic, PSO, PID, trajectory tracking, tilt-rotor UAV

Procedia PDF Downloads 105
395 BodeACD: Buffer Overflow Vulnerabilities Detecting Based on Abstract Syntax Tree, Control Flow Graph, and Data Dependency Graph

Authors: Xinghang Lv, Tao Peng, Jia Chen, Junping Liu, Xinrong Hu, Ruhan He, Minghua Jiang, Wenli Cao

Abstract:

As buffer overflows are among the most dangerous vulnerabilities, their effective detection is extremely necessary. Traditional detection methods are not accurate enough and consume too many resources to cope with today's complex and enormous code environments. To resolve these problems, we propose BodeACD, a method for buffer overflow detection based on the abstract syntax tree, control flow graph, and data dependency graph of C/C++ programs with source code. Firstly, BodeACD collects function samples of buffer overflows that are available on GitHub and then represents them as code representation sequences, which fuse the control flow, data dependency, and syntax structure of the source code to reduce information loss during code representation. Finally, BodeACD learns vulnerability patterns for vulnerability detection through deep learning. The results of the experiments show that BodeACD increased precision and recall by 6.3% and 8.5%, respectively, compared with the latest methods, and can effectively improve vulnerability detection while reducing the false-positive and false-negative rates.

Keywords: vulnerability detection, abstract syntax tree, control flow graph, data dependency graph, code representation, deep learning

Procedia PDF Downloads 156
394 Video Object Segmentation for Automatic Image Annotation of Ethernet Connectors with Environment Mapping and 3D Projection

Authors: Marrone Silverio Melo Dantas, Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner, Djamel Fawzi Hadj Sadok

Abstract:

The creation of a dataset is time-consuming and often discourages researchers from pursuing their goals. To overcome this problem, we present and discuss two solutions adopted for the automation of this process. Both optimize valuable user time and resources and support video object segmentation with object tracking and 3D projection. In our scenario, we acquire images from a moving robotic arm and, for each approach, generate distinct annotated datasets. We evaluated the precision of the annotations by comparing them with a manually annotated dataset, as well as their efficiency in the context of detection and classification problems. For detection support, we used YOLO and obtained, for the projection dataset, F1-score, accuracy, and mAP values of 0.846, 0.924, and 0.875, respectively. Concerning the tracking dataset, we achieved an F1-score of 0.861 and an accuracy of 0.932, whereas mAP reached 0.894. In order to evaluate the quality of the annotated images used for classification problems, we employed deep learning architectures, adopting the metrics accuracy and F1-score for VGG, DenseNet, MobileNet, Inception, and ResNet. The VGG architecture outperformed the others for both the projection and tracking datasets: it reached an accuracy and F1-score of 0.997 and 0.993, respectively, on the projection dataset, and, similarly, for the tracking dataset it achieved an accuracy of 0.991 and an F1-score of 0.981.

Keywords: RJ45, automatic annotation, object tracking, 3D projection

Procedia PDF Downloads 145
393 Investigation on 3D Printing of Calcium silicate Bioceramic Slurry for Bone Tissue Engineering

Authors: Amin Jabbari

Abstract:

The state of the art in major 3D printing technologies, such as powder-based and slurry-based printing, has led researchers to investigate the ability to fabricate bone scaffolds for bone tissue engineering using biomaterials. In addition, 3D printing technology can simulate mechanical and biological surface properties and print complex internal and external structures with high precision to match their functional properties. Polymer matrix composites reinforced with particulate bioceramics, hydrogels reinforced with particulate bioceramics, polymers coated with bioceramics, and non-porous bioceramics are among the materials that can be investigated for bone scaffold printing. Furthermore, it was shown that the introduction of high-density micropores into the sparingly dissolvable CSiMg10 and dissolvable CSiMg4 shell layers inevitably leads to a nearly 30% reduction in compressive strength, but such micropores can easily influence the ion release behavior of the scaffolds. Also, biocompatibility tests such as cytotoxicity, hemocompatibility, and genotoxicity tests were performed on the printed parts. The printed parts were tested in vitro for 24-26 h for cytotoxicity and 4 h for hemocompatibility, and the CSiMg4@CSiMg10-p scaffolds were found to have significantly higher osteogenic capability than the other scaffolds after implantation. Overall, these experimental studies demonstrate that 3D printed, additively manufactured bioceramic calcium (Ca)-silicate scaffolds with appropriate pore dimensions are promising for guiding new bone ingrowth.

Keywords: AM, 3D printed implants, bioceramic, tissue engineering

Procedia PDF Downloads 60