Search results for: gradient boosting machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3666

1656 Quantification of NDVI Variation within the Major Plant Formations in Nunavik

Authors: Anna Gaspard, Stéphane Boudreau, Martin Simard

Abstract:

Altered temperature and precipitation regimes associated with climate change generally result in improved conditions for plant growth. For Arctic and sub-Arctic ecosystems, this new climatic context favours an increase in primary productivity, a phenomenon often referred to as "greening". The development of an erect shrub cover has been identified as the main driver of Arctic greening. Although this phenomenon has been widely documented at the circumpolar scale, little information is available at the scale of plant communities, the basic unit of the Arctic and sub-Arctic landscape mosaic. The objective of this study is to quantify the variation of NDVI within the different plant communities of Nunavik, which will allow us to identify the plant formations that contribute the most to the increase in productivity observed in this territory. To do so, the variation of NDVI extracted from Landsat images for the period 1984 to 2020 was quantified. From the Landsat scenes, annual summer NDVI mosaics with a resolution of 30 m were generated. The ecological map of Northern Quebec vegetation was then overlaid on the time series of NDVI maps to calculate the average NDVI per vegetation polygon for each year. Our results show that NDVI increases are greatest in the forest tundra and erect shrub tundra bioclimatic domains, as well as in shrubby formations. Surface deposits, variations in mean annual temperature, and variations in winter precipitation are involved in NDVI variations. This study has thus allowed us to quantify changes in Nunavik's vegetation communities using fine spatial resolution satellite imagery.
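
The NDVI used throughout is computed per pixel from the red and near-infrared bands; a minimal sketch with invented reflectance values (not the study's data):

```python
import numpy as np

# Hypothetical surface reflectances for the red and near-infrared bands of a
# Landsat scene; the values below are illustrative only.
red = np.array([[0.08, 0.10], [0.12, 0.06]])
nir = np.array([[0.40, 0.35], [0.30, 0.45]])

# NDVI = (NIR - Red) / (NIR + Red), bounded to [-1, 1]
ndvi = (nir - red) / (nir + red)

# The per-polygon averages described in the abstract are zonal means of such
# rasters over each vegetation polygon; here, a plain mean stands in.
mean_ndvi = ndvi.mean()
print(round(float(mean_ndvi), 3))
```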

Keywords: climate change, latitudinal gradient, plant communities, productivity

Procedia PDF Downloads 180
1655 Towards Creative Movie Title Generation Using Deep Neural Models

Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie

Abstract:

Deep machine learning techniques including deep neural networks (DNN) have been used to model language and dialogue for conversational agents to perform tasks, such as giving technical support and also for general chit-chat. They have been shown to be capable of generating long, diverse and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate the concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby the human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some information, e.g., the genre of the movie. It then learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that the human movie titler may deploy that may not be immediately obvious to the human eye. To give an example of a generated movie title, for the movie synopsis: ‘A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.’; the original, true title is ‘The Driver’ and the one generated by the model is ‘The Masquerade’. A human evaluation was conducted where the DNN output was compared to the true human-generated title, as well as a number of baselines, on three 5-point Likert scales: ‘creativity’, ‘naturalness’ and ‘suitability’. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means of m=3.11 and m=3.12, respectively.
There is room for improvement in these models as they were rated significantly less ‘natural’ and ‘suitable’ when compared to the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results, however, are encouraging given the comparison with a highly-considered, well-crafted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups, who have watched the movie. This process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audiences’ attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.
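
The evaluation described above reduces to simple means over Likert ratings and a preference rate; a toy sketch with invented ratings (the study's actual means were 3.11 and 3.12, with the human title preferred 58% of the time):

```python
# Illustrative 5-point Likert ratings for hypothetical DNN-generated and
# human-written titles; the numbers are invented, not the study's data.
dnn_scores = [3, 4, 2, 3, 3, 4, 3]
human_scores = [3, 3, 4, 3, 3, 4, 2]

def mean(xs):
    return sum(xs) / len(xs)

# Preference: fraction of subjects who preferred the human title (1 = human)
preferred_human = [1, 1, 0, 1, 0, 1, 0]
pref_rate = mean(preferred_human)

print(round(mean(dnn_scores), 2), round(mean(human_scores), 2), round(pref_rate, 2))
```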

Keywords: creativity, deep machine learning, natural language generation, movies

Procedia PDF Downloads 326
1654 Double Clustering as an Unsupervised Approach for Order Picking of Distributed Warehouses

Authors: Hsin-Yi Huang, Ming-Sheng Liu, Jiun-Yan Shiau

Abstract:

Planning the order picking lists of warehouses is a significant challenge, as the costs associated with logistics weigh heavily on operational performance; in the e-commerce era, this task is especially important because these costs are high. Nowadays, many order planning techniques employ supervised machine learning algorithms. However, the definition of which features should be processed by such algorithms is not a simple task, being crucial to the proposed technique’s success. Against this background, we consider whether unsupervised algorithms can enhance the planning of order-picking lists. A Zone2 picking approach, which is based on using clustering algorithms twice, is developed. A simplified example is given to demonstrate the merit of our approach.
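
A minimal sketch in the spirit of a "clustering twice" scheme, under assumed item coordinates and orders (not the paper's data or exact algorithm): items are first clustered into zones, then orders are clustered by their zone affinity into picking batches:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal 2-D k-means; returns one cluster label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2,
            )
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = (
                    sum(m[0] for m in members) / len(members),
                    sum(m[1] for m in members) / len(members),
                )
    return labels

# Hypothetical item storage coordinates in two distributed warehouses
items = [(1, 1), (1, 2), (2, 1), (9, 9), (9, 8), (8, 9)]

# First clustering: group items into storage zones
zone = kmeans(items, k=2)

# Second clustering: represent each order by the share of its items that lie
# in item 0's zone, then cluster orders into picking batches.
orders = [[0, 1], [2, 3], [3, 4], [0, 2], [4, 5]]  # item indices per order
affinity = []
for o in orders:
    share = sum(1 for i in o if zone[i] == zone[0]) / len(o)
    affinity.append((share, 1 - share))
batch = kmeans(affinity, k=2, seed=1)
```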

Keywords: order picking, warehouse, clustering, unsupervised learning

Procedia PDF Downloads 157
1653 Removal of Na₂SO₄ by Electro-Confinement on Nanoporous Carbon Membrane

Authors: Jing Ma, Guotong Qin

Abstract:

We report electro-confinement desalination (ECMD), a desalination method combining electric field effects and confinement effects using nanoporous carbon membranes as the electrode. A carbon membrane with an average pore size of 8.3 nm was prepared by an organic sol-gel method. The support precursor was prepared by curing a porous phenolic resin tube. Resorcinol-formaldehyde sol was coated on the porous tubular resin support, and the membrane was obtained by carbonisation of the coated support. A well-bonded top layer with a thickness of 35 μm was supported by the macroporous support. Measurements of molecular weight cut-off using polyethylene glycol confirmed the average pore size of 8.3 nm. High salt rejection can be achieved because water molecules need not overcome high energy barriers in the confined space, while a large inherent dehydration energy is required for hydrated ions to enter the nanochannels. Additionally, a carbon membrane with an applied electric field can be used as an integrated membrane electrode combining the effects of confinement and the electric potential gradient. Such a membrane electrode can repel co-ions and attract counter-ions, using pressure as the driving force for mass transport. When the carbon membrane was set as the cathode, the rejection of SO₄²⁻ was 94.89%, while the removal of Na⁺ was less than 20%. We then used a carbon membrane anode chamber to treat the effluent water from the cathode chamber, and the rejection of SO₄²⁻ and Na⁺ reached 100% and 88.86%, respectively. ECMD promises to be an energy-efficient method for salt rejection.
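
The rejection figures quoted above follow the standard definition R = (1 - Cp/Cf) x 100%; a sketch with assumed feed and permeate concentrations:

```python
# Observed salt rejection from feed and permeate concentrations; the
# concentrations below are assumed for illustration, not measured values.
def rejection(c_feed, c_permeate):
    """R = (1 - Cp/Cf) * 100, in percent."""
    return (1 - c_permeate / c_feed) * 100

# e.g. a 10 g/L Na2SO4 feed leaving ~0.51 g/L sulfate in the permeate
print(round(rejection(10.0, 0.511), 2))  # ~94.89, the cathode-stage figure
```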

Keywords: nanoporous carbon membrane, confined effect, electric field, desalination, membrane reactor

Procedia PDF Downloads 124
1652 Gesture-Controlled Interface Using Computer Vision and Python

Authors: Vedant Vardhan Rathour, Anant Agrawal

Abstract:

The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computer using hand gestures and voice commands. The system leverages advanced computer vision techniques using the MediaPipe framework and OpenCV to detect and interpret real-time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the Speech Recognition library allows for seamless execution of tasks such as web searches, location navigation, and gesture control on the system through voice commands.
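
One core step of such a system, mapping a normalized fingertip landmark (as produced by hand-tracking frameworks like MediaPipe) to screen coordinates with smoothing, can be sketched as follows; the screen resolution and smoothing factor are assumptions:

```python
# A minimal sketch (not the authors' code) of turning a normalized landmark
# in [0, 1] x [0, 1] into a steadied cursor position in pixels.
SCREEN_W, SCREEN_H = 1920, 1080

def to_screen(x_norm, y_norm):
    """Map a normalized landmark to pixel coordinates."""
    return int(x_norm * SCREEN_W), int(y_norm * SCREEN_H)

def smooth(prev, new, alpha=0.3):
    """Exponential moving average reduces cursor jitter between frames."""
    return (prev[0] + alpha * (new[0] - prev[0]),
            prev[1] + alpha * (new[1] - prev[1]))

cursor = (960.0, 540.0)
for x, y in [(0.52, 0.48), (0.53, 0.47)]:  # fingertip positions per frame
    cursor = smooth(cursor, to_screen(x, y))
```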

Keywords: gesture recognition, hand tracking, machine learning, convolutional neural networks

Procedia PDF Downloads 7
1651 Asynchronous Sequential Machines with Fault Detectors

Authors: Seong Woo Kwak, Jung-Min Yang

Abstract:

A strategy of fault diagnosis and tolerance for asynchronous sequential machines is discussed in this paper. With no synchronizing clock, it is difficult to diagnose an occurrence of permanent or stuck-in faults in the operation of asynchronous machines. In this paper, we present a fault detector comprised of a timer and a set of static functions to determine the occurrence of faults. In order to realize immediate fault tolerance, corrective control theory is applied to designing a dynamic feedback controller. Existence conditions for an appropriate controller and its construction algorithm are presented in terms of reachability of the machine and the feature of fault occurrences.
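
The timer-based detection idea can be sketched as a toy class (an illustration, not the paper's formal construction): with no synchronizing clock, a fault is flagged when a state transition fails to complete within a known time bound:

```python
# Toy fault detector for an asynchronous machine: a timer bounds how long a
# stable-state transition may take; exceeding it signals a permanent fault.
class FaultDetector:
    def __init__(self, timeout):
        self.timeout = timeout      # maximum allowed transition duration
        self.started_at = None      # time the current transition began

    def transition_started(self, t):
        self.started_at = t

    def transition_ended(self, t):
        self.started_at = None

    def check(self, t):
        """Static check: fault if a transition has run past the timeout."""
        return self.started_at is not None and (t - self.started_at) > self.timeout

det = FaultDetector(timeout=5)
det.transition_started(t=0)
ok_at_3 = det.check(t=3)      # still within the bound
fault_at_9 = det.check(t=9)   # stuck: the transition never completed
```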

Keywords: asynchronous sequential machines, corrective control, fault diagnosis and tolerance, fault detector

Procedia PDF Downloads 347
1650 Predicting Personality and Psychological Distress Using Natural Language Processing

Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi

Abstract:

Background: Self-report multiple choice questionnaires have been widely utilized to quantitatively measure one’s personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable limitations in nature. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs to predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that in psychology due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which includes the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop the interview (semi-structured) and open-ended questions for the FFM-based personality assessments, specifically designed with experts in the field of clinical and personality psychology (phase 1), and to collect the personality-related text data using the interview questions and self-report measures on personality and psychological distress (phase 2). The purpose of the study includes examining the relationship between natural language data obtained from the interview questions, measuring the FFM personality constructs, and psychological distress to demonstrate the validity of the natural language-based personality prediction. Methods: The phase I (pilot) study was conducted on fifty-nine native Korean adults to acquire the personality-related text data from the interview (semi-structured) and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with the feedback from the external expert committee, consisting of personality and clinical psychologists.
Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from interviews were analyzed using natural language processing. The results of the online survey, including demographic data, depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals’ FFM of personality and the level of psychological distress (phase 2).
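
As a toy illustration of the NLP step, interview-answer text can be reduced to count-based features that a model could relate to personality scales; the word lists below are invented stand-ins, not the study's lexicon:

```python
from collections import Counter
import re

# Invented word lists standing in for a validated lexicon.
POSITIVE = {"enjoy", "love", "happy"}
NEGATIVE = {"worry", "anxious", "sad"}

def text_features(answer):
    """Turn one interview answer into simple count-rate features."""
    tokens = re.findall(r"[a-z']+", answer.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {
        "n_tokens": len(tokens),
        "pos_rate": sum(counts[w] for w in POSITIVE) / total,
        "neg_rate": sum(counts[w] for w in NEGATIVE) / total,
    }

f = text_features("I enjoy meeting people but I worry about deadlines.")
```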

Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality

Procedia PDF Downloads 77
1649 Effect of Depth on Texture Features of Ultrasound Images

Authors: M. A. Alqahtani, D. P. Coleman, N. D. Pugh, L. D. M. Nokes

Abstract:

In diagnostic ultrasound, the echo graphic B-scan texture is an important area of investigation since it can be analyzed to characterize the histological state of internal tissues. An important factor requiring consideration when evaluating ultrasonic tissue texture is the depth. The effect of attenuation with depth of ultrasound, the size of the region of interest, gain, and dynamic range are important variables to consider as they can influence the analysis of texture features. These sources of variability have to be considered carefully when evaluating image texture, as different settings might influence the resultant image. The aim of this study is to investigate the effect of depth on the texture features in vivo using a 3D ultrasound probe. The medial head of the left gastrocnemius muscle of 10 healthy subjects was scanned. Two regions, A and B, were defined at different depths within the gastrocnemius muscle boundary. The size of both ROIs was 280 × 20 pixels, and the distance between regions A and B was kept constant at 5 mm. Texture parameters including gray level, variance, skewness, kurtosis, co-occurrence matrix, run length matrix, gradient, autoregressive (AR) model, and wavelet transform were extracted from the images. The paired t-test was used to test the depth effect for the normally distributed data, and the Wilcoxon-Mann-Whitney test was used for the non-normally distributed data. The gray level, variance, and run length matrix were significantly lower when the depth increased, while the other texture parameters showed similar values at different depths. That is, all the texture parameters showed no significant difference between depths A and B (p > 0.05) except for gray level, variance, and run length matrix (p < 0.05). This indicates that gray level, variance, and run length matrix are depth dependent.
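
The first-order statistics mentioned (gray level, variance, skewness, kurtosis) can be computed directly from ROI pixel values; a sketch over a small invented patch (the study's ROIs were 280 × 20 pixels):

```python
import numpy as np

# Hypothetical gray-level values for a tiny region of interest; a real ROI
# from the study would be a 280 x 20 pixel patch.
roi = np.array([[52, 60, 55, 58],
                [61, 57, 54, 59],
                [50, 62, 56, 53]], dtype=float)

mean = roi.mean()                                   # mean gray level
var = roi.var()                                     # variance
skew = ((roi - mean) ** 3).mean() / var ** 1.5      # third standardized moment
kurt = ((roi - mean) ** 4).mean() / var ** 2 - 3    # excess kurtosis
```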

Keywords: ultrasound image, texture parameters, computational biology, biomedical engineering

Procedia PDF Downloads 293
1648 Two-Stage Flowshop Scheduling with Unsystematic Breakdowns

Authors: Fawaz Abdulmalek

Abstract:

The two-stage flowshop assembly scheduling problem is considered in this paper. There are multiple parallel machines at stage one and an assembly machine at stage two. Jobs are sequenced through the flowshop based on the Johnson rule and two extensions of it. A simulation model of the two-stage flowshop is constructed in which the machines at stage one are subject to random failures. Three simulation experiments, each with five scenarios, are conducted to test the effect of the three job ranking rules on the makespan. The Johnson Largest heuristic outperformed both the Johnson rule and the Johnson Smallest heuristic in two of the experiments, across all scenarios.
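
The Johnson rule underlying all three ranking heuristics orders jobs for a two-machine flowshop; a standard implementation (the assembly-stage extensions are not shown):

```python
# Johnson's rule for the classic two-machine flowshop: jobs with p1 < p2 go
# first in increasing p1; the rest go last in decreasing p2. This minimizes
# makespan for the two-machine case and is the base ranking extended here.
def johnson(jobs):
    """jobs: list of (name, p1, p2); returns the processing order by name."""
    first = sorted((j for j in jobs if j[1] < j[2]), key=lambda j: j[1])
    last = sorted((j for j in jobs if j[1] >= j[2]), key=lambda j: -j[2])
    return [j[0] for j in first + last]

# Invented processing times for five jobs on the two stages
jobs = [("A", 3, 6), ("B", 5, 2), ("C", 1, 2), ("D", 6, 6), ("E", 7, 5)]
print(johnson(jobs))  # C and A lead (short stage-1 times), B goes last
```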

Keywords: flowshop scheduling, random failures, johnson rule, simulation

Procedia PDF Downloads 337
1647 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to the low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has expressed a possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images from the ones used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. These models are evaluated using an external dataset for validation, and their accuracy, precision, recall, f1-score, IOU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays.
The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
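
The IOU reported for the segmentation model is the standard intersection-over-union of the predicted and ground-truth masks; a minimal sketch on invented masks:

```python
import numpy as np

# Pixel-wise IoU between a predicted lung mask and the ground truth; the
# same formula underlies the reported segmentation IOU of 0.928.
def iou(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

# Tiny invented masks for illustration
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 1, 0], [0, 1, 1]], dtype=bool)
print(round(float(iou(pred, truth)), 3))  # 3 overlapping px of 4 total
```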

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 72
1646 Heat Sink Optimization for a High Power Wearable Thermoelectric Module

Authors: Zohreh Soleimani, Sally Salome Shahzad, Stamatis Zoras

Abstract:

As a result of current energy and environmental issues, the human body is known as one of the promising candidates for converting wasted heat to electricity (Seebeck effect). A thermoelectric generator (TEG) is one of the most prevalent means of harvesting body heat and converting it to eco-friendly electrical power. However, the uneven distribution of the body heat and its curved geometry restrict harvesting an adequate amount of energy. To transform the heat radiated by the body into power effectively, the most direct solution is to conform the thermoelectric generators (TEGs) to the arbitrary surface of the body and increase the temperature difference across the thermoelectric legs. Due to this, a computational survey through COMSOL Multiphysics is presented in this paper with the main focus on the impact of integrating a flexible wearable TEG with a corrugated-shaped heat sink on the module power output. To eliminate external parameters (temperature, air flow, humidity), the simulations are conducted at indoor thermal levels and with the wearer stationary. The full thermoelectric characterization of the proposed TEG fitted with a wavy-shaped heat sink has been computed, leading to a maximum power output of 25 µW/cm² at a temperature difference of nearly 13 °C. It is noteworthy that, owing to the flexibility of the proposed TEG and heat sink, the applicability and efficiency of the module stay high even on the curved surfaces of the body. As a consequence, the results demonstrate the superiority of such a TEG over state-of-the-art counterparts fabricated with no heat sink and offer a new train of thought for the development of self-sustained and unobtrusive wearable power suppliers which generate energy from low-grade dissipated heat from the body.
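
At matched load a TEG delivers P = (S·ΔT)² / (4R); the sketch below plugs in assumed module parameters to land near the reported ~25 µW/cm² at ΔT ≈ 13 °C. S, R, and the area are invented for illustration, only ΔT and the power-density order of magnitude come from the abstract:

```python
# Matched-load TEG power estimate, P = (S * dT)^2 / (4 * R).
S = 0.0105     # module Seebeck coefficient, V/K (assumed)
R = 7.5        # module internal resistance, ohm (assumed)
AREA = 25.0    # module area, cm^2 (assumed)
dT = 13.0      # temperature difference across the legs, K (from the abstract)

p_max = (S * dT) ** 2 / (4 * R)           # W at matched load
p_density_uW_cm2 = p_max / AREA * 1e6     # uW/cm^2
```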

Keywords: device simulation, flexible thermoelectric module, heat sink, human body heat

Procedia PDF Downloads 150
1645 Development and Validation of Selective Methods for Estimation of Valaciclovir in Pharmaceutical Dosage Form

Authors: Eman M. Morgan, Hayam M. Lotfy, Yasmin M. Fayez, Mohamed Abdelkawy, Engy Shokry

Abstract:

Two simple, selective, economic, safe, accurate, precise and environmentally friendly methods were developed and validated for the quantitative determination of valaciclovir (VAL) in the presence of its related substances R1 (acyclovir) and R2 (guanine) in bulk powder and in the commercial pharmaceutical product containing the drug. Method A is a colorimetric method in which VAL selectively reacts with ferric hydroxamate, and the developed color is measured at 490 nm over a concentration range of 0.4-2 mg/mL, with a percentage recovery of 100.05 ± 0.58 and a correlation coefficient of 0.9999. Method B is a reversed-phase ultra-performance liquid chromatographic (UPLC) technique, which surpasses high-performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Efficient separation was achieved on an Agilent Zorbax CN column using ammonium acetate (0.1%) and acetonitrile as the mobile phase in a linear gradient program. The elution time for the separation was less than 5 min, and ultraviolet detection was carried out at 256 nm over a concentration range of 2-50 μg/mL, with a mean percentage recovery of 100.11 ± 0.55 and a correlation coefficient of 0.9999. The proposed methods were fully validated as per International Conference on Harmonization specifications and effectively applied for the analysis of valaciclovir in pure form and in tablet dosage form. Statistical comparison of the results obtained by the proposed and official or reported methods revealed no significant difference in the performance of these methods regarding accuracy and precision.
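
Validation of linearity and recovery reduces to a least-squares calibration fit; a sketch with invented absorbance data, mirroring the r ≈ 0.9999 acceptance criterion:

```python
import numpy as np

# Invented calibration data (concentration vs. instrument response); only
# the ~0.9999 correlation criterion mirrors the abstract.
conc = np.array([2, 10, 20, 30, 40, 50], dtype=float)        # ug/mL
resp = np.array([0.041, 0.201, 0.399, 0.602, 0.801, 0.998])  # absorbance

slope, intercept = np.polyfit(conc, resp, 1)   # linear calibration fit
r = np.corrcoef(conc, resp)[0, 1]              # correlation coefficient

# Percent recovery: an unknown with response 0.40 back-calculated against a
# nominal 20 ug/mL spike.
found = (0.40 - intercept) / slope
recovery = found / 20.0 * 100
```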

Keywords: hydroxamic acid, related substances, UPLC, valaciclovir

Procedia PDF Downloads 245
1644 Mapping Soils from Terrain Features: The Case of Nech Sar National Park of Ethiopia

Authors: Shetie Gatew

Abstract:

Current soil maps of Ethiopia do not accurately represent the soils of Nech Sar National Park. In the framework of studies on the ecology of the park, we prepared a soil map based on field observations and a digital terrain model derived from SRTM data with a 30-m resolution. The landscape comprises volcanic cones, lava and basalt outflows, undulating plains, horsts, alluvial plains and river deltas. SOTER-like terrain mapping units were identified: first, the DTM was classified into 128 terrain classes defined by slope gradient (4 classes), relief intensity (4 classes), potential drainage density (2 classes), and hypsometry (4 classes). A soil-landscape relation between the terrain mapping units and WRB soil units was then established based on 34 soil profile pits. Based on this relation, the terrain mapping units were either merged or split to produce a comprehensive soil and terrain map. The soil map indicates that Leptosols (30%), Cambisols (26%), Andosols (21%), Fluvisols (12%), and Vertisols (9%) are the most widespread Reference Soil Groups of the park. In contrast, the harmonized soil map of Africa derived from the FAO soil map of the world indicates that Luvisols (70%), Vertisols (14%) and Fluvisols (16%) would be the most common Reference Soil Groups. However, these latter mapping units are not consistent with the topography, nor did we find such extensive areas occupied by Luvisols during the field survey. This case study shows that with the now freely available SRTM data, it is possible to improve current soil information layers with relatively limited resources, even in a complex terrain like Nech Sar National Park.
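
The 128 terrain classes arise as 4 slope × 4 relief × 2 drainage × 4 hypsometry combinations; a sketch of such an encoding, with illustrative (not the authors') class boundaries:

```python
# Illustrative class boundaries; the paper defines the class counts
# (4 x 4 x 2 x 4), not these particular breakpoints.
SLOPE = [2, 8, 15]         # % gradient breakpoints -> 4 classes
RELIEF = [50, 100, 300]    # m relief intensity -> 4 classes
DRAIN = [1.0]              # km/km^2 potential drainage density -> 2 classes
HYPSO = [500, 1000, 2000]  # m elevation -> 4 classes

def cls(value, breaks):
    """Index of the class a value falls into (0 .. len(breaks))."""
    return sum(value > b for b in breaks)

def terrain_class(slope, relief, drain, hypso):
    """Pack the four class indices into a single 0-127 terrain unit id."""
    return ((cls(slope, SLOPE) * 4 + cls(relief, RELIEF)) * 2
            + cls(drain, DRAIN)) * 4 + cls(hypso, HYPSO)

n_classes = 4 * 4 * 2 * 4  # = 128, as in the paper
```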

Keywords: andosols, cambisols, digital elevation model, leptosols, soil-landscape relation

Procedia PDF Downloads 103
1643 The Impacts of Export in Stimulating Economic Growth in Ethiopia: ARDL Model Analysis

Authors: Natnael Debalklie Teshome

Abstract:

The purpose of the study was to empirically investigate the impacts of export performance and its volatility on economic growth in the Ethiopian economy. To do so, time-series data for the sample period 1974/75 to 2017/18 were collected from databases and annual reports of the IMF, WB, NBE, MoFED, UNCTAD, and EEA. The extended Cobb-Douglas production function of the neoclassical growth model, framed under endogenous growth theory, was used to consider both the performance and instability aspects of export. First, the unit root test was conducted using ADF and PP tests, and the data were found to be stationary, with a mix of I(0) and I(1). Then, the bounds test and the Wald test were employed, and results showed that there exists long-run co-integration among the study variables. All the diagnostic test results also reveal that the model fulfills the criteria of a best-fitted model. Therefore, the ARDL model and VECM were applied to estimate the long-run and short-run parameters, while the Granger causality test was used to test the causality between study variables. The empirical findings of the study reveal that only export and the coefficient of variation had significant positive and negative impacts on RGDP in the long run, respectively, while other variables were found to have an insignificant impact on the economic growth of Ethiopia. In the short run, except for gross capital formation and the coefficient of variation, which have a highly significant positive impact, all other variables have a strongly significant negative impact on RGDP. This shows exports had a strong, significant impact in both the short-run and long-run periods. However, their positive and statistically significant impact is observed only in the long run. Similarly, there was a highly significant export fluctuation in both periods, while significant commodity concentration (CCI) was observed only in the short run.
Moreover, the Granger causality test reveals that unidirectional causality running from export performance to RGDP exists in the long run and from both export and RGDP to CCI in the short run. Therefore, the export-led growth strategy should be sustained and strengthened. In addition, boosting the industrial sector is vital to bring structural transformation. Hence, the government has to give different incentive schemes and supportive measures to exporters to extract the spillover effects of exports. Greater emphasis on price-oriented diversification and specialization on major primary products that the country has a comparative advantage should also be given to reduce value-based instability in the export earnings of the country. The government should also strive to increase capital formation and human capital development via enhancing investments in technology and quality of education to accelerate the economic growth of the country.
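
The coefficient of variation used as the instability measure is simply the standard deviation of export earnings scaled by their mean; a sketch with invented annual figures:

```python
import statistics

# Invented annual export earnings, USD billions; the CV = sd / mean is the
# volatility measure used alongside export levels in the model.
exports = [1.8, 2.1, 1.5, 2.6, 2.0, 3.1, 2.4]

mean_exports = sum(exports) / len(exports)
cv = statistics.pstdev(exports) / mean_exports
```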

Keywords: export, economic growth, export diversification, instability, co-integration, granger causality, Ethiopian economy

Procedia PDF Downloads 77
1642 Grid Computing for Multi-Objective Optimization Problems

Authors: Aouaouche Elmaouhab, Hassina Beggar

Abstract:

Solving multi-objective discrete optimization applications has always been limited by the resources of a single machine: by computing power or by memory, and most often both. To speed up the calculations, grid computing represents a primary solution for the treatment of these applications through the parallelization of the resolution methods. In this work, we are interested in the study of some methods for solving the multiple objective integer linear programming problem based on Branch-and-Bound, and in the study of grid computing technology. This study allowed us to propose an implementation of the method of Abbas et al. on the grid, reducing the execution time. The main results are presented to illustrate the contribution.

Keywords: multi-objective optimization, integer linear programming, grid computing, parallel computing

Procedia PDF Downloads 482
1641 Ensuring Cyber Security Using Kippo Honeypots

Authors: S. Vivekananda Pandian

Abstract:

A major challenge in the current scenario is protecting computers and other electronic gadgets against cyber-attacks. Cyber warfare has become a major threat to the entire world: it targets a particular organization or country, spreading malware, breaching security, and causing major losses. Several computerized sectors, both public and private, such as the energy, oil refinery, defense, and aviation sectors, are prone to attacks, and many attacks go unnoticed while users access the internet. To study the characteristics and intentions of attackers, Kippo honeypots are used. Honeypots are traps set by defenders that make it possible to monitor malicious activities and study attackers in detail, which leads to a strengthening of security.

Keywords: attackers, security, Kippo Honeypots, virtual machine

Procedia PDF Downloads 426
1640 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications in production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited considering the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the applying industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industry branches. Typically, 20% of these parts are new work, which means every five years almost the entire product portfolio is replaced in their low series manufacturing environment. Consequently, it requires a flexible production system, where the estimation of the frequent setup periods' lengths is one of the key success factors. In the investigation, several (input) parameters have been studied and grouped to create an adequate training information set for an artificial neural network as a base for the estimation of the individual setup periods. In the first group, product information is collected, such as the product name and number of items. The second group contains material data like material type and colour. In the third group, surface quality and tolerance information are collected, including the finest surface and tightest (or narrowest) tolerance. The fourth group contains the setup data like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partners’ estimations were based previously. The artificial neural network model was trained on several thousand real industrial records.
The mean estimation accuracy of the setup periods' lengths was improved by 30%, and at the same time, the deviation of the prognosis was also improved by 50%. Furthermore, the mentioned parameter groups were also investigated with respect to the manufacturing order. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events take place, and the related data are collected.
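
One plausible encoding (an assumption, not the authors' exact scheme) of the four parameter groups into a network input vector: categorical fields are one-hot encoded, numeric fields passed through:

```python
# Invented vocabularies standing in for the partner's materials and machines.
MATERIALS = ["PTFE", "POM", "PMMA"]
MACHINES = ["cutter_a", "cutter_b"]

def encode(setup):
    """Flatten one setup record into a numeric feature vector."""
    vec = []
    vec += [1.0 if setup["material"] == m else 0.0 for m in MATERIALS]
    vec += [1.0 if setup["machine"] == m else 0.0 for m in MACHINES]
    vec += [float(setup["items"]),          # product group: lot size
            float(setup["tolerance_um"]),   # tightest tolerance, um
            float(setup["tools"])]          # tool count, the old estimator
    return vec

x = encode({"material": "POM", "machine": "cutter_a",
            "items": 40, "tolerance_um": 20, "tools": 3})
```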

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 243
1639 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, a domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. Various defect types are discussed, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and therefore demand precise identification. The narrative then turns to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques, including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advance towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for processing speeds that align with mass-production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocity.
Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 48
1638 Interlayer Mechanical Working: An Effective Strategy to Mitigate Solidification Cracking in Wire-Arc Additive Manufacturing (WAAM) of Fe-Based Shape Memory Alloy

Authors: Soumyajit Koley, Kuladeep Rajamudili, Supriyo Ganguly

Abstract:

In recent years, iron-based shape-memory alloys have been emerging as an inexpensive alternative to the costly Ni-Ti alloy and are thus considered suitable for many different applications in civil structures. The Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy contains 37 wt.% of total solute elements. Such a complex multi-component metallurgical system often leads to severe solute segregation and solidification cracking. Wire-arc additive manufacturing (WAAM) of the Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy was attempted using a cold-wire-fed plasma arc torch attached to a 6-axis robot. Self-standing walls were manufactured; however, multiple vertical cracks were observed after deposition of around 15 layers. Microstructural characterization revealed open surfaces of dendrites inside the cracks, confirming them as solidification cracks. A machine hammer peening (MHP) process was adopted on each layer to cold-work the newly deposited alloy. The MHP traverse speed was varied systematically to attain a window of operation in which cracking was completely eliminated. Microstructural and textural analyses were then carried out to correlate the peening process with the microstructure. MHP helped in several ways. Firstly, a compressive residual stress was induced on each layer, which countered the tensile residual stress evolved from the solidification process, thus reducing the net tensile stress on the wall along its length. Secondly, significant local plastic deformation from MHP, followed by the thermal cycle induced by the deposition of the next layer, resulted in a recovered and recrystallized equiaxed microstructure instead of long columnar grains along the vertical direction. This microstructural change increased the total crack propagation length and thus the overall toughness. Thirdly, the inter-layer peening significantly reduced the strong cubic {001} crystallographic texture formed along the build direction. The cubic {001} texture promotes easy separation of planes and easy crack propagation; reducing it therefore alleviates the chance of cracking.

Keywords: iron-based shape-memory alloy, wire-arc additive manufacturing, solidification cracking, inter-layer cold working, machine hammer peening

Procedia PDF Downloads 71
1637 An Investigation of the Machinability of Inconel 718 in EDM Using Different Cryogenically Treated Tools

Authors: Pradeep Joshi, Prashant Dhiman, Shiv Dayal Dhakad

Abstract:

Inconel 718 is a nickel-chromium-based superalloy with very high oxidation and corrosion resistance. It is widely used in aerospace, engine, and turbine applications due to its high mechanical strength and creep resistance. Despite this wide use, its machining is very difficult, especially with traditional machining methods; it becomes feasible only with non-traditional machining such as electrical discharge machining (EDM). During EDM there is wear of both the tool and the workpiece; tool wear is undesirable because it changes the tool's shape and geometry. To reduce the tool wear rate (TWR), cryogenic treatment is performed on the tool before the machining operation. The machining performance of the process is evaluated in terms of the material removal rate (MRR) and TWR, which are functions of discharge current, pulse on-time, and pulse off-time.

Keywords: EDM, cryogenic, TWR, MRR

Procedia PDF Downloads 453
1636 A Probabilistic View of the Spatial Pooler in Hierarchical Temporal Memory

Authors: Mackenzie Leake, Liyu Xia, Kamil Rocki, Wayne Imaino

Abstract:

In the Hierarchical Temporal Memory (HTM) paradigm, the effect of overlap between inputs on the activation of columns in the spatial pooler is studied. Numerical results suggest that similar inputs are represented by similar sets of columns and dissimilar inputs by dissimilar sets of columns. The spatial pooler is shown to produce these results under certain conditions on the connectivity and proximal thresholds. Following a discussion of the initialization of the threshold parameters, corresponding qualitative arguments about the learning dynamics of the spatial pooler are presented.
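The column-activation mechanism discussed above can be sketched in a few lines. This is a toy model under assumed parameters (potential-synapse fraction, top-k activation, proximal threshold), not the HTM reference implementation; it only illustrates why overlapping inputs tend to activate overlapping column sets.

```python
# Toy spatial pooler: each column holds a random set of potential synapses;
# a column's overlap with an input is the size of that intersection, and the
# k columns with the highest overlap (above a proximal threshold) activate.
# All parameter values here are illustrative, not the paper's settings.
import random

random.seed(1)

def make_columns(n_cols, input_size, potential_frac=0.5):
    n_syn = int(potential_frac * input_size)
    return [set(random.sample(range(input_size), n_syn)) for _ in range(n_cols)]

def active_columns(columns, input_bits, k=5, proximal_threshold=1):
    overlaps = [len(col & input_bits) for col in columns]
    ranked = sorted(range(len(columns)), key=lambda i: -overlaps[i])
    return {i for i in ranked[:k] if overlaps[i] >= proximal_threshold}

columns = make_columns(n_cols=50, input_size=100)
sdr_a = active_columns(columns, set(range(0, 20)))    # input A
sdr_b = active_columns(columns, set(range(5, 25)))    # similar to A
sdr_c = active_columns(columns, set(range(60, 80)))   # dissimilar to A
```

Because inputs A and B share most of their active bits, the columns that win for A tend also to win for B, while the disjoint input C activates a largely unrelated column set.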

Keywords: hierarchical temporal memory, HTM, learning algorithms, machine learning, spatial pooler

Procedia PDF Downloads 343
1635 Optimization of Cutting Parameters during Machining of Fine Grained Cemented Carbides

Authors: Josef Brychta, Jiri Kratochvil, Marek Pagac

Abstract:

The group of progressive cutting materials includes non-traditional, emerging, and less-used materials whose efficient use in cutting can lead to a quantum leap in the field of machining. These are essentially the "superhard" tool materials (STM) based on polycrystalline diamond (PCD) and polycrystalline cubic boron nitride (PCBN), high-performance cutting ceramics, and the constantly "perfected" fine-grained coated cemented carbides. The latter cutting materials are characterized by two parameters, toughness and hardness; varying the alloying elements can generally improve only one of these parameters at a time. Reducing the grain size, on the other hand, achieves the "contradictory" properties of increasing both hardness and toughness.

Keywords: fine-grained cutting materials, difficult-to-machine materials, optimum utilization, mechanics, manufacturing

Procedia PDF Downloads 299
1634 Unsupervised Learning of Spatiotemporally Coherent Metrics

Authors: Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, Yann LeCun

Abstract:

Current state-of-the-art classification and detection algorithms rely on supervised training. In this work, we study unsupervised feature learning in the context of temporally coherent video data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning and metric learning, and show that the trained encoder can be used to define a more temporally and semantically coherent metric.
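The slowness and sparsity regularizers described above can be written out directly. The sketch below shows only the training objective with made-up feature vectors; the convolutional encoder/decoder and pooling are omitted, and the weighting coefficients alpha and beta are illustrative, not the paper's values.

```python
# Training-objective sketch only: reconstruction error plus an L1 slowness
# penalty on adjacent-frame codes and an L1 sparsity penalty. The encoder,
# decoder, and pooling are omitted; alpha, beta, and the feature values
# below are illustrative assumptions.
def coherence_loss(z_t, z_t1, recon_err, alpha=0.5, beta=0.1):
    slowness = sum(abs(a - b) for a, b in zip(z_t, z_t1))  # ||z_t - z_{t+1}||_1
    sparsity = sum(abs(a) for a in z_t) + sum(abs(b) for b in z_t1)
    return recon_err + alpha * slowness + beta * sparsity

# Adjacent (similar) frames should map to nearby codes and so incur a
# smaller loss than a dissimilar pair with the same reconstruction error.
near = coherence_loss([1.0, 0.0, 0.5], [0.9, 0.1, 0.5], recon_err=0.2)
far = coherence_loss([1.0, 0.0, 0.5], [0.0, 1.0, 0.1], recon_err=0.2)
```

Minimizing the slowness term is what pushes temporally adjacent frames toward similar codes, which is the basis of the learned coherent metric.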

Keywords: machine learning, pattern clustering, pooling, classification

Procedia PDF Downloads 454
1633 Free Radical Scavenging Activity of Fractionated Extract and Structural Elucidation of Isolated Compounds from Hydrocotyl Bonariensis Comm. Ex Lam Leaves

Authors: Emmanuel O Ajani, Sabiu S, Mariam Zakari, Fisayo A Bamisaye

Abstract:

Hydrocotyl bonariensis is a plant whose anticataractogenic potential has been reported. In the present study, an attempt was made to evaluate the in vitro antioxidant activity of fractionates of the leaf extract and to characterize some of its chemical constituents. DPPH, H₂O₂, OH⁻, and NO free radical scavenging, metal chelating, and reducing power assays were used to evaluate the antioxidant activity of the crude extract fractionates. Fresh leaves of Hydrocotyl bonariensis were extracted in 70% methanol. The extract was partitioned with solvent systems of increasing polarity (n-hexane, chloroform, ethyl acetate, methanol, and water). Compounds were isolated from the aqueous fractionate using accelerated gradient chromatography, vacuum liquid chromatography, preparative TLC, and conventional column chromatography. The presence of the chemical groups was established with HPLC and Fourier-transform infrared spectroscopy. The structures of isolated compounds were elucidated by spectroscopic study and chemical shifts. Data from the study indicate that all the fractionates contain compounds with free radical scavenging activity. This activity was most pronounced in the aqueous fractionate (DPPH IC₅₀ 0.025 ± 0.011 mg/ml, metal chelating capacity 27.5%, OH⁻ scavenging IC₅₀ 0.846 ± 0.037 mg/ml, H₂O₂ scavenging IC₅₀ 0.521 ± 0.015 mg/ml, reducing power IC₅₀ 0.248 ± 0.025 mg/ml, and NO scavenging IC₅₀ 0.537 ± 0.038 mg/ml). Two compounds were isolated; when compared with data from the literature, their structures were suggestive of the polyphenolic flavonoid quercetin and of 3-O-β-D-glucopyranosyl-sitosterol. The results indicate that H. bonariensis leaves contain bioactive compounds with antioxidant activity.

Keywords: antioxidant, cataract, free radical, flavonoids, hydrocotyl bonariensis

Procedia PDF Downloads 270
1632 Optimum Design of Dual-Purpose Outriggers in Tall Buildings

Authors: Jiwon Park, Jihae Hur, Kukjae Kim, Hansoo Kim

Abstract:

In this study, outriggers, which are horizontal structures connecting a building core to distant columns to increase the lateral stiffness of a tall building, are also used to reduce differential axial shortening. The outriggers therefore serve the dual purposes of reducing the lateral displacement and reducing the differential axial shortening. Since the location of an outrigger greatly affects its effectiveness in terms of the lateral displacement at the top of the building and the maximum differential axial shortening, the optimum locations of the dual-purpose outriggers can be determined by an optimization method. Because the floors where the outriggers are installed are given as integer numbers, conventional gradient-based optimization methods cannot be directly used. In this study, a piecewise quadratic interpolation method is used to resolve the integrality requirement posed by the optimum locations of the dual-purpose outriggers. The optimal solutions for the dual-purpose outriggers are searched for by linear scalarization, which is a popular method for multi-objective optimization problems. It was found that increasing the number of outriggers reduced both the maximum lateral displacement and the maximum differential axial shortening. It was also noted that the optimum locations for reducing the lateral displacement and for reducing the differential axial shortening were different. Acknowledgment: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017R1A2B4010043) and financially supported by the Korea Ministry of Land, Infrastructure and Transport (MOLIT) as the U-City Master and Doctor Course Grant Program.
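The scalarized search over integer outrigger floors can be sketched as follows, with brute-force enumeration standing in for the paper's piecewise quadratic interpolation. The two objective functions are invented convex surrogates, not results of structural analysis.

```python
# Brute-force linear scalarization over integer outrigger floors. The two
# objective functions are invented surrogates for illustration; the paper
# instead evaluates structural models and resolves the integrality of floor
# numbers with piecewise quadratic interpolation.
def lateral_disp(floor, n_floors=60):
    # hypothetical: lateral displacement is lowest near 55% of the height
    return 1.0 + ((floor - 0.55 * n_floors) / n_floors) ** 2

def diff_shortening(floor, n_floors=60):
    # hypothetical: differential shortening decreases with outrigger height
    return 1.0 - 0.8 * floor / n_floors

def best_floor(w1=0.5, w2=0.5, n_floors=60):
    # scalarize the two objectives and enumerate the integer floor numbers
    return min(range(1, n_floors + 1),
               key=lambda f: w1 * lateral_disp(f, n_floors)
                             + w2 * diff_shortening(f, n_floors))
```

Sweeping the weights (w1, w2) traces the compromise locations between the displacement-optimal and shortening-optimal floors, reflecting the paper's finding that the two single-objective optima differ.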

Keywords: concrete structure, optimization, outrigger, tall building

Procedia PDF Downloads 175
1631 Design of a Drift Assist Control System Applied to Remote Control Car

Authors: Sheng-Tse Wu, Wu-Sung Yao

Abstract:

In this paper, a drift assist control system is proposed for remote control (RC) cars to achieve the desired drift angle. A steering servo control scheme is proposed to assist drift driving. A gyroscope sensor is included to detect the tail sliding of the car and to achieve better automatic counter-steering, preventing the RC car from spinning. Analysis of tire traction and vehicle dynamics is used to obtain the dynamic track of RC cars. The system includes a control gain that adjusts the counter-steering amount according to the sensor reading. An illustrative example of a 1:10 RC drift car is given, and the real-time control algorithm is realized on an Arduino Uno.
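The counter-steering idea, a correction proportional to the yaw-rate error measured by the gyroscope, can be sketched as below. The gain, target yaw rate, and servo limit are illustrative values, not the tuned parameters of the paper's Arduino implementation.

```python
# Counter-steering sketch: correction proportional to yaw-rate error, with
# servo saturation. Gain and limit values are illustrative, not the tuned
# parameters of the paper's 1:10 RC drift car.
def counter_steer(yaw_rate, target_yaw_rate, gain=0.8, max_angle=30.0):
    correction = gain * (target_yaw_rate - yaw_rate)    # degrees of steering
    return max(-max_angle, min(max_angle, correction))  # clamp to servo range

# Oversteer (tail sliding faster than intended) yields a negative correction,
# i.e. steering against the slide to keep the car from spinning.
```

On the real car this loop would run at a fixed rate, reading the gyroscope each cycle and writing the clamped angle to the steering servo.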

Keywords: drift assist control system, remote control cars, gyroscope, vehicle dynamics

Procedia PDF Downloads 395
1630 Challenges in Video Based Object Detection in Maritime Scenario Using Computer Vision

Authors: Dilip K. Prasad, C. Krishna Prasath, Deepu Rajan, Lily Rachmawati, Eshan Rajabally, Chai Quek

Abstract:

This paper discusses the technical challenges in maritime image processing and machine vision problems for video streams generated by cameras. Even well-documented problems such as horizon detection and registration of frames in a video are very challenging in maritime scenarios, and the more advanced problems of background subtraction and object detection in video streams are harder still. The dynamic nature of the background, the unavailability of static cues, the presence of small objects against distant backgrounds, and illumination effects all contribute to the challenges discussed here.
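To make the background-subtraction difficulty concrete, a baseline running-average background model, the kind of method the dynamic maritime background defeats, can be sketched as follows; the pixel values and threshold are invented for illustration.

```python
# Running-average background model: a standard baseline that the dynamic
# maritime background (wakes, waves, changing illumination) defeats, since
# moving water leaks into the model. Pixel values are invented intensities.
def update_background(bg, frame, alpha=0.1):
    # per-pixel exponential moving average
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=30.0):
    # a pixel is foreground if it deviates strongly from the background
    return [abs(f - b) > threshold for b, f in zip(bg, frame)]

bg = [100.0, 100.0, 100.0]           # learned background (3 pixels)
frame = [102.0, 180.0, 99.0]         # middle pixel: a small bright object
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)
```

On water, wave crests routinely exceed such a threshold, which is why the paper argues that static-background assumptions break down in maritime video.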

Keywords: autonomous maritime vehicle, object detection, situation awareness, tracking

Procedia PDF Downloads 455
1629 Using Neural Networks for Click Prediction of Sponsored Search

Authors: Afroze Ibrahim Baqapuri, Ilya Trofimov

Abstract:

Sponsored search is a multi-billion dollar industry and makes up a major source of revenue for search engines (SE). Click-through rate (CTR) estimation plays a crucial role in ad selection and greatly affects SE revenue, advertiser traffic, and user experience. We propose a novel architecture for solving the CTR prediction problem by combining artificial neural networks (ANN) with decision trees. First, we compare the ANN with other popular machine learning models used for this task. Then we combine the ANN with MatrixNet (a proprietary implementation of boosted trees) and evaluate the performance of the system as a whole. The results show that our approach provides a significant improvement over existing models.
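The hybrid idea, blending a neural-network score with a tree-ensemble score, can be sketched as below. Since MatrixNet is proprietary, a pair of hand-written decision stumps stands in for the boosted trees, and the feature meanings, weights, and blend factor are assumptions, not the paper's learned combination.

```python
# Hybrid CTR sketch: average a neural-network score with a tree-ensemble
# score. Hand-written stumps stand in for the proprietary MatrixNet model;
# all feature meanings, weights, and the blend factor are assumptions.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ann_score(features, weights):
    # stand-in for the trained ANN: a single logistic unit
    return sigmoid(sum(w * f for w, f in zip(weights, features)))

def tree_score(features):
    # stand-in for boosted trees: two decision stumps with additive scores
    s = 0.6 if features[0] > 0.5 else -0.2   # e.g. ad-query relevance
    s += 0.3 if features[1] > 0.3 else -0.1  # e.g. historical CTR signal
    return sigmoid(s)

def combined_ctr(features, weights, blend=0.5):
    return (blend * ann_score(features, weights)
            + (1 - blend) * tree_score(features))

good_ad = combined_ctr([0.8, 0.6], [1.0, 1.0])   # relevant, well-clicked ad
poor_ad = combined_ctr([0.1, 0.1], [1.0, 1.0])   # irrelevant ad
```

The ranking system would then select ads by expected revenue, i.e. bid times the blended CTR estimate.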

Keywords: neural networks, sponsored search, web advertisement, click prediction, click-through rate

Procedia PDF Downloads 572
1628 Experimental Evaluation of UDP in Wireless LAN

Authors: Omar Imhemed Alramli

Abstract:

Like the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP) is a transport-layer protocol in the Open Systems Interconnection (OSI) model and in the TCP/IP model of networks. Unlike TCP, UDP behaviour could not be evaluated using the pcattcp tool on the Windows operating system platform, so the study set out to find a tool that supports the evaluation of UDP. After collecting information about different tools, iperf was chosen and run under Cygwin, installed both on a Windows XP host and on Windows XP in a VirtualBox virtual machine on a single computer. Iperf is used for the experimental evaluation of UDP and to observe what happens when packets are sent between host and guest over wired and wireless networks. Many test scenarios were run, and the major UDP aspects such as jitter, packet loss, and throughput were evaluated.
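The UDP metrics that iperf reports can be illustrated by computing them from received datagram records. The function below is a sketch under assumed inputs (sequence numbers and transit times); it follows the RFC 3550 smoothed-jitter estimator that iperf's UDP mode is based on, but it is not iperf's actual code.

```python
# Sketch of the UDP metrics iperf reports, computed from received datagram
# records. Packet loss comes from the received count versus the sent count;
# jitter uses the RFC 3550 smoothed estimator over successive transit-time
# differences. The inputs below are invented; this is not iperf's code.
def udp_stats(sent, received):
    """received: list of (sequence_number, transit_time) in arrival order."""
    loss = 1.0 - len(received) / sent
    jitter = 0.0
    for (_, prev_t), (_, t) in zip(received, received[1:]):
        d = abs(t - prev_t)                 # transit-time difference
        jitter += (d - jitter) / 16.0       # RFC 3550 smoothing factor
    return loss, jitter

# Example: 5 datagrams sent, sequence numbers 3 and 5 lost in transit.
loss, jitter = udp_stats(5, [(1, 0.010), (2, 0.012), (4, 0.011)])
```

Because UDP has no retransmission, these loss and jitter figures directly characterize the wired versus wireless paths compared in the experiments.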

Keywords: TCP, UDP, IPERF, wireless LAN

Procedia PDF Downloads 353
1627 Emotions Evoked by Robots - Comparison of Older Adults and Students

Authors: Stephanie Lehmann, Esther Ruf, Sabina Misoch

Abstract:

Background: Due to demographic change and the shortage of skilled nursing staff, assistive robots are being built to support older adults at home and nursing staff in care institutions. When assistive robots facilitate tasks that are usually performed by humans, user acceptance is essential. Even though they are an important aspect of acceptance, emotions towards different assistive robots and different situations of robot use have so far not been examined in detail. The appearance of assistive robots can trigger emotions that affect their acceptance. Acceptance of robots is assumed to be greater when they look more human-like; however, too much human similarity can be counterproductive. Regarding different groups, it is assumed that older adults have a more negative attitude towards robots than younger adults. Within the framework of a simulated robot study, the aim was to investigate the emotions of older adults compared to those of students towards robots with different appearances and in different situations, and thereby contribute to a deeper view of the emotions influencing acceptance. Methods: In a questionnaire study, vignettes were used to assess emotions toward robots in different situations and of different appearance. The vignettes were composed of two situations (service and care) shown by video and four pictures of robots varying in human similarity (machine-like to android). The combinations of the vignettes were randomly distributed to the participants. One hundred forty-two older adults and 35 bachelor students of nursing participated. They filled out a questionnaire that surveyed 30 positive and 30 negative emotions. For each group, older adults and students, sum scores of "positive emotions" and "negative emotions" were calculated. Mean value and standard deviation, or n for sample size and % for frequencies, were calculated according to the scale level.
Differences in the scores of positive and negative emotions across situations were tested with t-tests. Results: Overall, older adults reported significantly more positive emotions towards robots in general than students, and students reported significantly more negative emotions than older adults. Regarding the two situations, the results were similar for the care situation, with older adults reporting more positive and fewer negative emotions than students. In the service situation, older adults reported significantly more positive emotions; negative emotions did not differ significantly from those of the students. Regarding the appearance of the robot, there were no significant differences in the emotions reported towards the machine-like, mechanical-human-like, and human-like appearances. For the android robot, students reported significantly more negative emotions than older adults. Conclusion: There were differences between the emotions reported by older adults and by students. Older adults reported more positive emotions, and students more negative emotions, towards robots in different situations and with different appearances. It can be assumed that older adults have a different attitude towards the use of robots than younger people, especially young adults in the health sector. Therefore, the use of robots in the service or care sector should not be rejected rashly based on the attitudes of younger persons, without equally considering the attitudes of older adults.
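The group comparison described in the methods can be illustrated with a two-sample Welch t statistic; the score lists below are invented, not the study's data, and the study's actual analysis may have used a different t-test variant.

```python
# Welch's two-sample t statistic, as could be used to compare the emotion
# sum scores of the two groups. The score lists are invented illustrations,
# not the study's data.
import math

def welch_t(a, b):
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

older = [22, 25, 24, 27, 26]      # invented positive-emotion sum scores
students = [18, 17, 20, 19, 16]
t = welch_t(older, students)      # positive: older adults score higher
```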

Keywords: emotions, robots, seniors, young adults

Procedia PDF Downloads 464