Search results for: artificial intelligence in semiconductor manufacturing
3293 Design and Fabrication of Stiffness Reduced Metallic Locking Compression Plates through Topology Optimization and Additive Manufacturing
Authors: Abdulsalam A. Al-Tamimi, Chris Peach, Paulo Rui Fernandes, Paulo J. Bartolo
Abstract:
Bone fixation implants currently used to treat traumatic bone fractures and to promote fracture healing are built with biocompatible metallic materials such as stainless steel, cobalt chromium and titanium and its alloys (e.g., CoCrMo and Ti6Al4V). The noticeable stiffness mismatch between current metallic implants and the host bone is associated with negative outcomes such as stress shielding, which causes bone loss and implant loosening, leading to deficient fracture treatment. This paper, part of a major research program to design the next generation of bone fixation implants, describes the combined use of three-dimensional (3D) topology optimization (TO) and additive manufacturing powder bed technology (Electron Beam Melting) to redesign and fabricate plates based on the current standard one (i.e., the locking compression plate). Topology optimization is applied with an objective function that maximizes stiffness, constrained by volume reductions (i.e., 25-75%), in order to obtain optimized implant designs with reduced stress shielding under different boundary conditions (i.e., tension, bending, torsion and combined loads). The stiffness of the original and optimized plates is assessed through a finite-element study. The TO results showed an actual reduction in stiffness for most of the plates due to the critical values of volume reduction. Additionally, the optimized plates fabricated using powder bed techniques proved that the integration of TO and additive manufacturing makes it possible to produce stiffness-reduced plates with acceptable tolerances.
Keywords: additive manufacturing, locking compression plate, finite element, topology optimization
Procedia PDF Downloads 199
3292 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms
Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee
Abstract:
Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites to prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by means of detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting the changes in static or dynamic behavior of isotropic structures have been developed in the last two decades. These methods, based on analytical approaches, are limited in their capability to deal with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristic techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification.
Among them, GAs attract our attention because they do not require a considerable amount of data in advance when dealing with complex problems and can make a global solution search possible, as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of Glass Fiber-reinforced Polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. The finite element model is used to study free vibrations of laminated composite plates under fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only the first mode shapes of a structure for the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences
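The inverse problem described above can be sketched in miniature: a GA searches for element stiffness factors whose predicted first natural frequency matches the measured one. This is a toy sketch only; the paper couples a laminated-plate finite element model with the GA, while here a four-element serial spring chain stands in for the FE solver, and all numbers are invented for illustration.

```python
# Toy GA-based inverse stiffness identification (hypothetical surrogate model).
import math
import random

random.seed(0)

N_ELEM = 4
TRUE_FACTORS = [1.0, 0.6, 1.0, 0.8]   # hypothetical degraded stiffness factors

def first_frequency(factors):
    """Surrogate for the FE eigen-solver: the first natural frequency of a
    serial spring chain scales with the square root of the combined
    stiffness (series springs: 1/k = sum of 1/k_i)."""
    k = 1.0 / sum(1.0 / f for f in factors)
    return math.sqrt(k)

MEASURED = first_frequency(TRUE_FACTORS)   # plays the role of measured data

def fitness(candidate):
    # Negative frequency mismatch: larger is better.
    return -abs(first_frequency(candidate) - MEASURED)

def crossover(a, b):
    cut = random.randrange(1, N_ELEM)
    return a[:cut] + b[cut:]

def mutate(c):
    i = random.randrange(N_ELEM)
    c = list(c)
    c[i] = min(1.0, max(0.1, c[i] + random.uniform(-0.1, 0.1)))
    return c

# Evolve: keep an elite group, breed the rest from it.
pop = [[random.uniform(0.1, 1.0) for _ in range(N_ELEM)] for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]

best = max(pop, key=fitness)   # candidate stiffness distribution
```

Because the elite group is carried over unchanged each generation, the best-found mismatch never worsens, which mirrors why GAs are attractive for such non-convex inverse problems.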
Procedia PDF Downloads 274
3291 The Development Status of Terahertz Wave and Its Prospect in Wireless Communication
Authors: Yiquan Liao, Quanhong Jiang
Abstract:
Since terahertz waves were first observed by German scientists, terahertz radiation has been obtained through different broadband and narrowband generation technologies. Then, with the development of semiconductor and other technologies, terahertz imaging has become increasingly mature. From the earliest applications in nondestructive testing in aviation to present applications in information transmission and human security screening, terahertz will play a prominent role in various fields. Terahertz-based weapons would be epoch-making, constituting a strong deterrent against technologically less advanced countries. At the same time, terahertz technology in the fields of imaging, medicine, public welfare and communication serves the well-being of the country and the people.
Keywords: terahertz, imaging, communication, medical treatment
Procedia PDF Downloads 99
3290 Value Analysis Dashboard in Supply Chain Management, Real Case Study from Iran
Authors: Seyedehfatemeh Golrizgashti, Seyedali Dalil
Abstract:
The goal of this paper is to propose a supply chain value dashboard for home appliance manufacturing firms, creating more value for all stakeholders via a balanced scorecard approach. The balanced scorecard is an effective approach that managers have used to evaluate supply chain performance in many fields, but insufficient attention has been paid to all supply chain stakeholders, to improving value creation, and to quantitatively defining the correlation between value indicators and performance measures. In this research, the key stakeholders in the home appliance supply chain, value indicators with respect to creating more value for stakeholders, and the most important metrics for evaluating supply chain value performance based on the balanced scorecard approach were selected via a literature review. The most important indicators were then identified through a survey of expert judgment focused on creating more value for stakeholders. Structural equation modelling has been used to disclose the relations between value indicators and balanced scorecard metrics. The important result of this research is an effective value dashboard that creates more value for all stakeholders in the supply chain via the balanced scorecard approach, based on an empirical study covering ten home appliance manufacturing firms in Iran. Home appliance manufacturing firms can increase their stakeholders' satisfaction by using this value dashboard.
Keywords: supply chain management, balanced scorecard, value, structural modeling, stakeholders
Procedia PDF Downloads 351
3289 Craft Development in the 19th Century Sokoto City: A Lesson towards Economic Diversification in the 21st Century Nigeria
Authors: Nura Bello
Abstract:
The Sokoto caliphate is the product of a hectic revolutionary movement that took place in the first decade of the nineteenth century under the leadership of Sheikh Usmanu Danfodio. The movement led to the overthrow of the Sarauta system in Hausaland, leading to the emergence of the Sokoto caliphate with Sokoto city as its headquarters. This development led to the collapse of Alkalawa (headquarters of the Gobir Kingdom) in 1808. A year later, in 1809, Sokoto city emerged as the headquarters of the Sokoto Caliphate. Due to the open-door policy adopted by the leaders, the city experienced an influx of people, especially craft manufacturers from all over Hausaland and beyond, who occupied many areas of the city and engaged in different craft production. This paper aims to highlight the development of craft manufacturing in 19th century Sokoto city and its contribution to transforming the city into a major economic base of the caliphate. In dealing with the above issue, qualitative research methods mainly involving the use of oral and archival data are adopted. The paper finds that the expansion of craft manufacturing in 19th century Sokoto city resulted in part from the role played by the leadership of the caliphate, which favoured such development. The paper argues that if those industries were revived in Nigeria today, they would create various jobs for the younger generations, resulting in economic prosperity.
Keywords: craft manufacturing, diversification, economy, migration
Procedia PDF Downloads 110
3288 The Impacts of Soft and Hard Enterprise Resource Planning to the Corporate Business Performance through the Enterprise Resource Planning Integrated System
Authors: Sautma Ronni Basana, Zeplin Jiwa Husada Tarigan, Widjojo Suprapto
Abstract:
Companies have already implemented the Enterprise Resource Planning (ERP) system to increase data integration so that they can improve their business performance. Although some companies have managed to implement ERP well, they still need to improve gradually so that the ERP functions can be optimized. To obtain faster and more accurate data, the key users and the IT department have to customize the process to suit the needs of the company. In reality, sustaining the ERP technology system requires both soft and hard ERP to improve the business performance of the company. Soft and hard ERP are needed to build a robust system that ensures the integration among departments runs smoothly. This research asks three questions. First, does soft ERP have an impact on hard ERP and system integration? Second, does hard ERP have an impact on system integration? Finally, is the business performance of manufacturing companies affected by soft ERP, hard ERP, and system integration? Questionnaires were distributed to 100 manufacturing companies in East Java and collected from 90 companies that have implemented ERP, a response rate of 90%. From the data analysis using the PLS program, it is found that soft ERP has positive impacts on hard ERP and system integration. Hard ERP also has positive impacts on system integration. Finally, the business process performance of the manufacturing companies is affected by system integration, soft ERP, and hard ERP simultaneously.
Keywords: soft ERP, hard ERP, system integration, business performance
Procedia PDF Downloads 405
3287 Performing Diagnosis in Building with Partially Valid Heterogeneous Tests
Authors: Houda Najeh, Mahendra Pratap Singh, Stéphane Ploix, Antoine Caucheteux, Karim Chabir, Mohamed Naceur Abdelkrim
Abstract:
Building systems are highly vulnerable to different kinds of faults and human misbehavior. Energy efficiency and user comfort are directly affected by abnormalities in building operation. The available fault diagnosis tools and methodologies rely in particular on rules or pure model-based approaches, where it is assumed that a model- or rule-based test can be applied to any situation without taking the actual testing context into account. Contextual tests with validity domains could considerably reduce the design effort for detection tests. The main objective of this paper is to take fault validity into account when validating the test model, considering non-modeled events such as occupancy, weather conditions, door and window openings, and integrating the expert's knowledge of the state of the system. The concept of heterogeneous tests is combined with test validity to generate fault diagnoses. A combination of rule-, range- and model-based tests, known as heterogeneous tests, is proposed to reduce the modeling complexity. The calculation of logical diagnoses, drawing on artificial intelligence, provides a global explanation consistent with the test results. An application example, an office setting at the Grenoble Institute of Technology, shows the efficiency of the proposed technique.
Keywords: heterogeneous tests, validity, building system, sensor grids, sensor fault, diagnosis, fault detection and isolation
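The core idea above, that a test only contributes to a diagnosis when the current context lies inside its validity domain, can be sketched as follows. The test names, thresholds, and context fields are hypothetical illustrations, not the paper's actual tests.

```python
# Minimal sketch of heterogeneous tests with validity domains (all names
# and thresholds are invented for illustration).
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class HeterogeneousTest:
    name: str
    valid: Callable[[Dict], bool]    # validity domain over the context
    passes: Callable[[Dict], bool]   # the rule-, range- or model-based check

tests = [
    HeterogeneousTest(
        "co2_model",
        valid=lambda ctx: not ctx["window_open"],   # model ignores window airflow
        passes=lambda ctx: ctx["co2_ppm"] < 1200,
    ),
    HeterogeneousTest(
        "temp_range",
        valid=lambda ctx: ctx["occupied"],          # comfort bound only when occupied
        passes=lambda ctx: 19 <= ctx["temp_c"] <= 26,
    ),
]

def diagnose(ctx: Dict) -> List[str]:
    """Return names of applicable tests that fail; out-of-domain tests abstain."""
    return [t.name for t in tests if t.valid(ctx) and not t.passes(ctx)]

context = {"window_open": True, "occupied": True, "co2_ppm": 1500, "temp_c": 28}
faults = diagnose(context)   # co2_model abstains (window open); temp_range fails
```

The abstention behavior is the point: without validity domains, the open window would have raised a spurious CO₂ fault.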
Procedia PDF Downloads 294
3286 Medical Neural Classifier Based on Improved Genetic Algorithm
Authors: Fadzil Ahmad, Noor Ashidi Mat Isa
Abstract:
This study introduces an improved genetic algorithm procedure that focuses the search around near-optimal solutions corresponding to a group of elite chromosomes. This is achieved through a novel crossover technique known as Segmented Multi Chromosome Crossover. It preserves the highly important information contained in a gene segment of an elite chromosome and allows an offspring to carry information from gene segments of multiple chromosomes. In this way, the algorithm has a better chance of effectively exploring the solution space. The improved GA is applied to the automatic and simultaneous parameter optimization and feature selection of an artificial neural network in the pattern recognition of medical problems, namely cancer and diabetes. The experimental results show that the average classification accuracy on the cancer and diabetes datasets has improved by 0.1% and 0.3%, respectively, using the new algorithm.
Keywords: genetic algorithm, artificial neural network, pattern classification, classification accuracy
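The Segmented Multi Chromosome Crossover idea can be sketched as below: the offspring is assembled segment by segment, each segment copied from a (possibly different) elite parent, so it can carry information from more than two chromosomes. The equal-width segment boundaries and toy genes are assumptions for illustration; the abstract does not give the operator's exact details.

```python
# Hedged sketch of a segmented multi-chromosome crossover (segmentation
# scheme assumed, not taken from the paper).
import random

random.seed(1)

def segmented_multi_crossover(elites, n_segments):
    """Build one offspring from gene segments of several elite chromosomes."""
    length = len(elites[0])
    bounds = [round(i * length / n_segments) for i in range(n_segments + 1)]
    child = []
    for lo, hi in zip(bounds, bounds[1:]):
        parent = random.choice(elites)   # each segment may come from a different elite
        child.extend(parent[lo:hi])
    return child

elites = [[0] * 8, [1] * 8, [2] * 8]     # three elite chromosomes (toy genes)
child = segmented_multi_crossover(elites, n_segments=4)
```

Compared with classical two-parent crossover, every segment of the child is guaranteed to come from the elite group, which is what keeps the search focused near the current best solutions.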
Procedia PDF Downloads 474
3285 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification
Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro
Abstract:
Voice recognition algorithms such as automatic speech recognition and text-to-speech systems for African languages can play an important role in bridging the digital divide of Artificial Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that classifies user responses as inputs for an interactive voice response system. A dataset of the Wolof words for 'yes' and 'no' was collected as audio recordings. A two-stage data augmentation approach is adopted to enlarge the dataset to the size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients are implemented. Convolutional Neural Networks (CNNs) have proven to be very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. For voice response classification, the recordings are transformed into sound frequency feature spectra, to which an image classification methodology is then applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications on both web and mobile platforms.
Keywords: automatic speech recognition, interactive voice response, voice response recognition, wolof word classification
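The "sound transformed into spectra" step that feeds the CNN can be sketched as follows. The paper uses MFCC features; to keep the example dependency-free, a plain log-power spectrogram is shown here instead, and the synthetic sine wave stands in for a Wolof 'yes'/'no' recording.

```python
# Sketch of turning a waveform into a 2-D spectrum "image" for a CNN
# (log-power spectrogram as a stand-in for MFCCs).
import numpy as np

def log_power_spectrogram(signal, frame_len=256, hop=128):
    """Split the waveform into overlapping frames, window each frame,
    and take log(1 + |FFT|^2) per frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    spectra = np.abs(np.fft.rfft(np.array(frames) * np.hanning(frame_len), axis=1)) ** 2
    return np.log1p(spectra)            # 2-D array handed to the CNN as an image

sr = 8000
t = np.arange(sr) / sr                  # one second of synthetic audio
wave = np.sin(2 * np.pi * 440 * t)      # stand-in for a recorded word
spec = log_power_spectrogram(wave)      # shape: (n_frames, frame_len // 2 + 1)
```

Once every recording is reduced to such a fixed-layout 2-D array, standard image-classification CNN architectures apply directly, which is the methodological move the abstract describes.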
Procedia PDF Downloads 116
3284 Efficiency and Reliability Analysis of SiC-Based and Si-Based DC-DC Buck Converters in Thin-Film PV Systems
Authors: Elaid Bouchetob, Bouchra Nadji
Abstract:
This research paper compares the efficiency and reliability (R(t)) of SiC-based and Si-based DC-DC buck converters in thin-film PV systems with an AI-based MPPT controller. Using Simplorer/Simulink simulations, the study assesses their performance under varying conditions. Results show that the SiC-based converter outperforms the Si-based one in efficiency and cost-effectiveness, especially under high-temperature and low-irradiance conditions. It also exhibits superior reliability, particularly at high temperature and voltage. The reliability function R(t) is analyzed to assess system performance over time. The SiC-based converter demonstrates better reliability, considering factors like component failure rates and system lifetime. The research focuses on the buck converter's role in charging a lithium battery within the PV system. By combining the SiC-based converter and the AI-based MPPT controller, higher charging efficiency, improved reliability, and cost-effectiveness are achieved. The SiC-based converter proves superior under challenging conditions, emphasizing its potential for optimizing PV system charging. These findings contribute insights into the efficiency and reliability of SiC-based and Si-based converters in PV systems. SiC technology's advantages, coupled with advanced control strategies, promote efficient and sustainable energy storage using lithium batteries. The research supports PV system design and optimization for reliable renewable energy utilization.
Keywords: efficiency, reliability, artificial intelligence, SiC device, thin film, buck converter
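The kind of R(t) comparison described above is commonly done with the constant-failure-rate model, R(t) = exp(-λt). The sketch below uses that textbook model with placeholder failure rates; the λ values are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch of an exponential reliability comparison (failure rates
# are hypothetical placeholders, not the paper's values).
import math

def reliability(failure_rate_per_hour, hours):
    """R(t) = exp(-lambda * t) for a constant failure rate lambda."""
    return math.exp(-failure_rate_per_hour * hours)

LAMBDA_SI = 4e-6     # hypothetical Si-device failure rate (1/h)
LAMBDA_SIC = 1e-6    # hypothetical SiC-device failure rate (1/h)

ten_years = 10 * 8760                      # mission time in hours
r_si = reliability(LAMBDA_SI, ten_years)
r_sic = reliability(LAMBDA_SIC, ten_years)
```

In practice the failure rates themselves would be derated for junction temperature and voltage stress, which is why the abstract reports the SiC advantage growing under high temperature and voltage.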
Procedia PDF Downloads 62
3283 Synthesis, Characterization and Photocatalytic Applications of Ag-Doped-SnO₂ Nanoparticles by Sol-Gel Method
Authors: M. S. Abd El-Sadek, M. A. Omar, Gharib M. Taha
Abstract:
In recent years, the photocatalytic degradation of various kinds of organic and inorganic pollutants using semiconductor powders as photocatalysts has been extensively studied. Owing to their relatively high photocatalytic activity, biological and chemical stability, low cost, non-toxicity and long stable life, tin oxide materials have been widely used as catalysts in chemical reactions, including the synthesis of vinyl ketone, the oxidation of methanol and so on. Tin oxide (SnO₂), with a rutile-type crystalline structure, is an n-type wide band gap (3.6 eV) semiconductor that presents a proper combination of chemical, electronic and optical properties that make it advantageous in several applications. In the present work, SnO₂ nanoparticles were synthesized at room temperature by the sol-gel process and the thermohydrolysis of SnCl₂ in isopropanol, with the crystallite size controlled through calcination. The synthesized nanoparticles were characterized using XRD, TEM, FT-IR, and UV-Visible spectroscopic techniques. The crystalline structure and grain size of the synthesized samples were analyzed by X-ray diffraction (XRD), and the XRD patterns confirmed the presence of the tetragonal SnO₂ phase. In this study, methylene blue degradation was tested using SnO₂ nanoparticles (at different calcination temperatures) as a photocatalyst under sunlight as the source of irradiation. The results showed that the highest percentage of degradation of methylene blue dye was obtained using the SnO₂ photocatalyst calcined at 800 °C. The operational parameters were optimized to find the conditions that result in complete removal of organic pollutants from aqueous solution. It was found that the degradation of dyes depends on several parameters such as irradiation time, initial dye concentration, catalyst dose, and the presence of metals such as silver as a dopant and its concentration.
Percent degradation increased with irradiation time. The degradation efficiency decreased as the initial concentration of the dye increased. The degradation efficiency increased with the catalyst dose up to a certain level; on further increasing the SnO₂ photocatalyst dose, the degradation efficiency decreased. The degradation efficiency obtained from pure SnO₂ was compared with that of SnO₂ doped with different percentages of Ag.
Keywords: SnO₂ nanoparticles, sol-gel method, photocatalytic applications, methylene blue, degradation efficiency
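The percent degradation reported in such studies is typically computed as D = (C₀ − Cₜ)/C₀ × 100, with C₀ the initial and Cₜ the residual dye concentration (often taken from absorbance readings). The concentrations below are illustrative, not measurements from the paper.

```python
# Standard percent-degradation calculation (hypothetical concentrations).
def degradation_percent(c0, ct):
    """D = (C0 - Ct) / C0 * 100, concentrations in the same units."""
    return (c0 - ct) / c0 * 100.0

c0 = 10.0    # mg/L methylene blue before irradiation (hypothetical)
ct = 1.5     # mg/L remaining after sunlight exposure (hypothetical)
d = degradation_percent(c0, ct)
```

Tracking D against irradiation time, initial concentration, catalyst dose, and Ag-doping level reproduces exactly the parameter study the abstract describes.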
Procedia PDF Downloads 152
3282 Aggression Related Trauma and Coping among University Students, Exploring Emotional Intelligence Applications on Coping with Aggression Related Trauma
Authors: Asanka Bulathwatta
Abstract:
This study tries to figure out the role of emotional intelligence in developing coping strategies among adolescents who face traumatic events. Late-adolescent students who have enrolled in university education (Bachelor students/first-year students) were selected as the sample. University education is an important stage of students' academic life. Therefore, all students need to develop their competencies to attain the goal of passing examinations and also to develop the wisdom related to the scientific knowledge they gather through their academic life. The study is conducted in a cross-cultural manner, taking place in Germany and Sri Lanka, with a sample of 200 students from each country. Late adolescence is a critical period, a first step in which emotional and social qualities are acquired in social life. There are many adolescents who have been affected by aggression-related traumatic events during their lifespan but have not been identified or treated. More specifically, there are numerous burning issues within the first year of university, namely ragging of juniors by seniors, bullying, invalidation, and issues arising from attitude changes and orientation problems. Those factors can be traumatic for both academic and day-to-day life. Identifying students with emotional damage, their resiliency after aggression-related traumas, and effective rehabilitation from traumatic events are urgently needed in order to support university students in their academic achievements and social life. Research findings in Germany show that interpersonal traumas, life-threatening illnesses and the death of someone close are common in the German sample.
Keywords: emotional intelligence, aggression, trauma, coping
Procedia PDF Downloads 472
3281 Statistical Process Control in Manufacturing, a Case Study on an Iranian Automobile Company
Authors: M. E. Khiav, D. J. Borah, H. T. S. Santos, V. T. Faria
Abstract:
For automobile companies, it has become very important to ensure sound quality in manufacturing and assembly in order to prevent the occurrence of defects and to reduce the number of part replacements to be done in the service centers during the warranty period. Statistical Process Control (SPC) is widely used as a tool to analyze the quality of such processes and plays a significant role in process improvement by identifying the patterns and locations of defects. In this paper, a case study has been conducted on an Iranian automobile company. The paper performs a quality analysis of a particular component, the internal bearing for the back wheel of a particular car model manufactured by the company, based on 10 million data records received from its service centers located all over the country. By creating control charts, including X-bar and S charts and EWMA charts, it was observed that after 2009 the component underwent frequent failures and there was a sharp dip in the average distance covered by the cars before the component required replacement or maintenance. Correlation analysis was performed to find the reasons that might have affected the quality of the component in the cars produced by the company after 2009. Apart from manufacturing issues, some political and environmental factors were identified as having a potential impact on the quality of the component. A maiden attempt has been made to analyze the quality issues within an Iranian automobile manufacturer; such issues often get neglected in developing countries. The paper also discusses the possibility of the political scenario of Iran and the country's environmental conditions affecting the quality of the end products, which not only strengthens the extant literature but also provides a new direction for future research.
Keywords: capability analysis, car manufacturing, statistical process control, quality control, quality tools
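The EWMA chart used above can be sketched as follows: each observation updates an exponentially weighted average, which is flagged when it leaves the time-varying control limits. The data, target mean, sigma, and the smoothing parameters λ and L below are common textbook defaults and illustrative numbers, not values from the study.

```python
# Sketch of an EWMA control chart (illustrative data; lambda and L are
# standard default choices, not the study's parameters).
import math

def ewma_chart(xs, mean, sigma, lam=0.2, L=3.0):
    """Return (ewma value, out-of-control flag) for each observation."""
    z, out = mean, []
    for i, x in enumerate(xs, start=1):
        z = lam * x + (1 - lam) * z
        # Exact (time-varying) control-limit half-width for the EWMA statistic.
        half_width = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        out.append((z, abs(z - mean) > half_width))
    return out

# Distance-to-failure per batch (in 1000 km), drifting down after an assumed change point.
data = [80, 81, 79, 80, 78, 72, 70, 69, 68, 67]
points = ewma_chart(data, mean=80.0, sigma=2.0)
alarms = [i for i, (_, flag) in enumerate(points) if flag]
```

The EWMA's memory is what makes it sensitive to exactly the kind of sustained small drift the study reports after 2009, where a plain X-bar chart might react later.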
Procedia PDF Downloads 380
3280 Multiple Intelligence Theory with a View to Designing a Classroom for the Future
Authors: Phalaunnaphat Siriwongs
Abstract:
The classroom of the 21st century is an ever-changing forum for new and innovative thoughts and ideas. With increasing technology and opportunity, students have rapid access to information that only decades ago would have taken weeks to obtain. Unfortunately, new techniques and technology are not a cure for the fundamental problems that have plagued the classroom ever since formal education was established. Class size has long been debated in academia. While it is difficult to pinpoint an exact number, it is clear that in this case more does not mean better. By looking into the successes and pitfalls of different class sizes, the true advantages of smaller classes become clear. Previously, one class comprised 50 students. Since they were seventeen- and eighteen-year-old students, it was sometimes quite difficult for them to stay focused. To help students understand and gain more knowledge, the researcher introduced the theory of multiple intelligences, which enabled students to learn according to their own learning preferences no matter how they were being taught. In this lesson, the researcher designed a cycle of learning activities involving all intelligences so that everyone had equal opportunities to learn.
Keywords: multiple intelligences, role play, performance assessment, formative assessment
Procedia PDF Downloads 283
3279 The Role of Executive Functions and Emotional Intelligence in Leadership: A Neuropsychological Perspective
Authors: Chrysovalanto Sofia Karatosidi, Dimitra Iordanoglou
Abstract:
The overlap of leadership skills with personality traits, beliefs and values, and the integration of cognitive abilities and analytical and critical thinking skills into leadership competencies, raise the need to segregate and investigate them further. Hence, the domains of cognitive function that contribute to leadership effectiveness should also be identified. Organizational cognitive neuroscience and neuroleadership can shed light on the study of these critical leadership skills. As the first part of our research, this pilot study aims to explore the relationships between higher-order cognitive functions (executive functions), trait emotional intelligence (EI), personality, and general cognitive ability in leadership. Twenty-six graduate and postgraduate students were assessed on neuropsychological tests that measure important aspects of executive functions (EF) and completed self-reported questionnaires about trait EI, personality, leadership styles, and leadership effectiveness. Specifically, we examined four core EFs: fluency (phonemic and semantic), information updating and monitoring, working memory, and inhibition of prepotent responses. Leadership effectiveness was positively associated with phonemic fluency (PF), which involves mental flexibility, in turn an increasingly important ability for future leaders in this rapidly changing world. Transformational leadership was positively associated with trait EI, extraversion, and openness to experience, a result in line with previous findings. The relationship between specific EF constructs and leadership effectiveness emphasizes the role of higher-order cognitive functions in the field of leadership as an individual difference. EF brings a new perspective into the leadership literature by providing a direct, non-invasive, scientifically valid connection between brain function and leadership behavior.
Keywords: cognitive neuroscience, emotional intelligence, executive functions, leadership
Procedia PDF Downloads 158
3278 A Large Language Model-Driven Method for Automated Building Energy Model Generation
Authors: Yake Zhang, Peng Xu
Abstract:
The development of building energy models (BEM) required for architectural design and analysis is a time-consuming and complex process, demanding a deep understanding and proficient use of simulation software. To streamline the generation of complex building energy models, this study proposes an automated method for generating building energy models using a large language model and a BEM library, aimed at improving the efficiency of model generation. This method leverages a large language model to parse user-specified requirements for target building models, extracting key features such as building location, window-to-wall ratio, and the thermal performance of the building envelope. The BEM library is used to retrieve energy models that match the target building's characteristics, serving as reference information for the large language model to enhance the accuracy and relevance of the generated model, allowing for the creation of a building energy model that adapts to the user's modeling requirements. This study enables the automatic creation of building energy models from natural language inputs, reducing the professional expertise required for model development while significantly decreasing the time and complexity of manual configuration. In summary, this study provides an efficient and intelligent solution for building energy analysis and simulation, demonstrating the potential of large language models in the field of building simulation and performance modeling.
Keywords: artificial intelligence, building energy modelling, building simulation, large language model
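The parsing step described above, turning a natural-language request into structured building features, can be mimicked in miniature. The paper uses a large language model for this; the regex extractor below is only a stand-in to show the shape of the structured output, and the field names are assumptions, not the paper's schema.

```python
# Illustrative stand-in for LLM-based requirement parsing (field names and
# the request sentence are hypothetical).
import re

def parse_request(text):
    """Extract key BEM features from a natural-language modeling request."""
    features = {}
    m = re.search(r"window-to-wall ratio of (\d+(?:\.\d+)?)\s*%", text)
    if m:
        features["window_to_wall_ratio"] = float(m.group(1)) / 100
    m = re.search(r"\bin ([A-Z][a-zA-Z ]+?)(?:,|\.|$)", text)
    if m:
        features["location"] = m.group(1).strip()
    m = re.search(r"U-value of (\d+(?:\.\d+)?)", text)
    if m:
        features["envelope_u_value"] = float(m.group(1))
    return features

req = "Model an office tower in Shanghai, window-to-wall ratio of 40 %, wall U-value of 0.6."
feats = parse_request(req)
```

In the paper's pipeline, this feature dictionary is what would be matched against the BEM library to retrieve reference models before generation.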
Procedia PDF Downloads 26
3277 Mending Broken Fences Policing: Developing the Intelligence-Led/Community-Based Policing Model (IP-CP) and the Quality/Quantity/Crime (QQC) Model
Authors: Anil Anand
Abstract:
Despite the enormous strides made during the past decade, particularly with the adoption and expansion of community policing, there remains much that police leaders can do to improve police-public relations. The urgency is particularly evident in cities across the United States and Europe, where an increasing number of police interactions over the past few years have ignited large, sometimes even national, protests against police policy and strategy, highlighting a gap between what police leaders feel they have achieved in terms of public satisfaction, support, and legitimacy and the perception of bias among many marginalized communities. The decision on which policing strategy is chosen over another, how many resources are allocated, and how strenuously the policy is applied resides primarily with the police and the units and subunits tasked with its enforcement. The scope and opportunity that police officers have to shape social attitudes and social policy are important elements that cannot be overstated. How do police leaders, for instance, decide when to apply one strategy, say community-based policing, over another, like intelligence-led policing? How do police leaders measure performance and success? Should these measures be based on quantitative criteria rather than qualitative ones, or on some other criteria? And how do police leaders define, allow, and control discretionary decision-making? Mending Broken Fences Policing provides police and security service leaders with a model based on social cohesion that incorporates intelligence-led and community policing (IP-CP), supplemented by a quality/quantity/crime (QQC) framework, to provide a four-step process for the articulable application of police intervention, performance measurement, and application of discretion.
Keywords: social cohesion, quantitative performance measurement, qualitative performance measurement, sustainable leadership
Procedia PDF Downloads 295
3276 Comparison of the Material Response Based on Production Technologies of Metal Foams
Authors: Tamas Mankovits
Abstract:
Lightweight cellular-type structures like metal foams have excellent mechanical properties, so interest in these materials as load-bearing structural elements, e.g., implants, is spreading widely. Numerous technologies are available to produce metal foams. In this paper, the material response of closed-cell foam structures produced by direct foaming and by additive technology is compared, and the production technology circumstances are also investigated. Geometrical variations are developed for foam structures produced by additive manufacturing and simulated by the finite element method to predict their mechanical behavior.
Keywords: additive manufacturing, direct foaming, finite element method, metal foam
Procedia PDF Downloads 197
3275 Dissolved Gas Analysis Based Regression Rules from Trained ANN for Transformer Fault Diagnosis
Authors: Deepika Bhalla, Raj Kumar Bansal, Hari Om Gupta
Abstract:
Dissolved Gas Analysis (DGA) has been widely used for fault diagnosis in transformers. Artificial neural networks (ANNs) have high accuracy but are regarded as black boxes that are difficult to interpret. For many problems, it is desirable to extract knowledge from a trained neural network (NN) so that the user can gain a better understanding of the solution arrived at by the NN. This paper applies a pedagogical approach for rule extraction from function-approximating neural networks (REFANN) to incipient fault diagnosis, using the concentrations of the gases dissolved in the transformer oil as the input to the NN. The input space is split into subregions, and for each subregion there is a linear equation that is used to predict the type of fault developing within a transformer. Experiments on real data indicate that the approach can extract simple and useful rules and give fault predictions that match the actual fault and are at times better than those predicted by the IEC method.
Keywords: artificial neural networks, dissolved gas analysis, rule extraction, transformer
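The rule format described above, one linear equation per input subregion, can be illustrated with a toy sketch. The gas set, the subregion boundaries, and the coefficients below are invented for illustration only; they are not the rules extracted in the paper.

```python
# Hypothetical REFANN-style rule set: the trained network's input space is
# split into subregions, and each subregion carries a linear equation over
# the dissolved-gas concentrations (here H2, CH4, C2H2; values in ppm).

def classify_fault(h2, ch4, c2h2):
    """Toy piecewise-linear rules; boundaries and coefficients are illustrative."""
    # Subregion 1: acetylene-dominated -> arcing-type fault score
    if c2h2 > ch4:
        return ("arcing", 0.8 * c2h2 + 0.1 * h2)
    # Subregion 2: hydrogen-dominated -> partial-discharge score
    if h2 > ch4:
        return ("partial_discharge", 0.6 * h2 + 0.2 * ch4)
    # Subregion 3: remaining inputs -> thermal-fault score
    return ("thermal", 0.5 * ch4 + 0.3 * h2)
```

Each branch is directly readable by an engineer, which is the point of extracting rules instead of querying the black-box network.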
Procedia PDF Downloads 536
3274 Differences in Parental Acceptance, Rejection, and Attachment and Associations with Adolescent Emotional Intelligence and Life Satisfaction
Authors: Diana Coyl-Shepherd, Lisa Newland
Abstract:
Research and theory suggest that parenting and parent-child attachment influence emotional development and well-being. Studies indicate that adolescents often describe differences in their relationships with each parent and may form different types of attachment to mothers and fathers. During adolescence and young adulthood, romantic partners may also become attachment figures, influencing well-being and providing a relational context for emotion skill development. Mothers, however, tend to remain the primary attachment figure; fathers and romantic partners are more likely to be secondary attachment figures. The following hypotheses were tested: 1) participants would rate mothers as more accepting and less rejecting than fathers, 2) participants would rate secure attachment to mothers higher, and insecure attachment lower, compared to fathers and romantic partners, 3) parental rejection and insecure attachment would be negatively related to life satisfaction and emotional intelligence, and 4) secure attachment and parental acceptance would be positively related to life satisfaction and emotional intelligence. After IRB approval and informed consent, one hundred fifty adolescents and young adults (ages 11-28, M = 19.64; 71% female) completed an online survey. Measures included parental acceptance, rejection, attachment (i.e., secure, dismissing, and preoccupied), emotional intelligence (i.e., seeking and providing comfort, use and understanding of one's own emotions, expressing warmth, understanding and responding to others' emotional needs), and well-being (i.e., self-confidence and life satisfaction). As hypothesized, compared to fathers', mothers' acceptance was significantly higher, t(190) = 3.98, p < .001, and rejection significantly lower, t(190) = -4.40, p < .001.
Group differences in secure attachment were significant, F(2, 389) = 40.24, p < .001; post-hoc analyses revealed significant differences between mothers and fathers and between mothers and romantic partners, with mothers having the highest mean score. Group differences in preoccupied attachment were significant, F(2, 388) = 13.37, p < .001; post-hoc analyses revealed significant differences between mothers and romantic partners and between fathers and romantic partners, with mothers having the lowest mean score. However, group differences in dismissing attachment were not significant, F(2, 389) = 1.21, p = .30; scores for mothers and romantic partners were similar, and fathers' mean score was the highest. For hypotheses 3 and 4, significant negative correlations were found between life satisfaction and the dismissing parent and romantic attachment, preoccupied father and romantic attachment, and mother and father rejection variables; the secure attachment variables and parental acceptance were positively correlated with life satisfaction. Self-confidence was correlated only with mother acceptance. For emotional intelligence, seeking and providing comfort were negatively correlated with parent dismissing attachment and mother rejection; secure mother and romantic attachment and mother acceptance were positively correlated with these variables. Use and understanding of one's own emotions were negatively correlated with parent and partner dismissing attachment and parent rejection; romantic secure attachment and parent acceptance were positively correlated. Expressing warmth was negatively correlated with the dismissing attachment variables, romantic preoccupied attachment, and parent rejection, whereas the secure attachment variables were positively associated.
Understanding and responding to others' emotional needs were negatively correlated with the parent dismissing and preoccupied attachment variables and mother rejection; only secure father attachment was positively correlated.
Keywords: adolescent emotional intelligence, life satisfaction, parent and romantic attachment, parental rejection and acceptance
Procedia PDF Downloads 192
3273 Reconstruction Spectral Reflectance Cube Based on Artificial Neural Network for Multispectral Imaging System
Authors: Iwan Cony Setiadi, Aulia M. T. Nasution
Abstract:
The multispectral imaging (MSI) technique has been used for skin analysis, especially for distant mapping of in-vivo skin chromophores by analyzing spectral data at each reflected image pixel. For ergonomic purposes, our multispectral imaging system is decomposed into two parts: a light source compartment based on LEDs with 11 different wavelengths, and a monochromatic 8-bit CCD camera with a C-mount objective lens. Software with a MATLAB GUI to control the system was also developed. Our system provides 11 monoband images and is coupled with software reconstructing hyperspectral cubes from these multispectral images. In this paper, we propose a new method to build a hyperspectral reflectance cube based on an artificial neural network algorithm. After preliminary corrections, a neural network is trained using the 32 natural colors from the X-Rite Color Checker Passport, whose reference spectra are acquired with a spectrophotometer. This neural network is then used to retrieve a megapixel multispectral cube between 380 and 880 nm with a 5 nm resolution from a low-spectral-resolution multispectral acquisition. As hyperspectral cubes contain a spectrum for each pixel, comparison should be done between the theoretical values from the spectrophotometer and the reconstructed spectrum. To evaluate the performance of the reconstruction, we used the Goodness of Fit Coefficient (GFC) and the Root Mean Squared Error (RMSE). To validate the reconstruction, the set of 8 colour patches reconstructed by our MSI system was compared with the one recorded by the spectrophotometer. The average GFC was 0.9990 (standard deviation = 0.0010) and the average RMSE was 0.2167 (standard deviation = 0.064).
Keywords: multispectral imaging, reflectance cube, spectral reconstruction, artificial neural network
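The two evaluation metrics named above have standard definitions; a minimal pure-Python sketch of GFC (the normalized inner product of the measured and reconstructed spectra) and RMSE, for one pixel's spectrum:

```python
import math

def gfc(measured, reconstructed):
    # Goodness of Fit Coefficient: |<m, r>| / (||m|| * ||r||); 1.0 means a
    # perfect match in spectral shape (it is insensitive to overall scale).
    num = abs(sum(m * r for m, r in zip(measured, reconstructed)))
    den = (math.sqrt(sum(m * m for m in measured))
           * math.sqrt(sum(r * r for r in reconstructed)))
    return num / den

def rmse(measured, reconstructed):
    # Root Mean Squared Error between the two spectra, sample by sample.
    n = len(measured)
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reconstructed)) / n)
```

Because GFC is scale-invariant, a reconstruction that is a uniform multiple of the reference still scores 1.0, which is why the paper reports RMSE alongside it.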
Procedia PDF Downloads 322
3272 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud
Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal
Abstract:
Ever since the idea of delivering computing services as a commodity, like other utilities such as electricity and telephone, was floated, the scientific fraternity has diverted its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, resulting in a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business, and finance, to name a few. The smart grid is another discipline that direly needs to benefit from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors as well as the computational needs of self-healing, load balancing, and demand response features. But security issues such as confidentiality, integrity, availability, accountability, and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed in the cloud, but hackers/intruders still manage to bypass the security of the cloud. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud.
Considering the success of artificial neural networks in building robust intrusion detection systems, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.
Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid
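The abstract does not specify the network architecture; as a minimal stand-in for the idea of a learned attack detector, a single-neuron perceptron trained on a toy, assumed feature set (normalized request rate and packet rate, not features from the paper) can illustrate how a neural model learns to separate normal from attack traffic:

```python
# Minimal perceptron sketch of a learned intrusion detector.
# Features and data are invented for illustration; a real smart grid cloud
# IDS would use a deeper network over many traffic/sensor features.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron learning rule on (feature-vector, 0/1-label) pairs."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred               # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def detect(w, b, x):
    """1 = flagged as attack, 0 = normal traffic."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

On linearly separable toy data the rule converges in a few epochs; the paper's ANN generalizes this idea to non-linear decision boundaries.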
Procedia PDF Downloads 318
3271 Next-Gen Solutions: How Generative AI Will Reshape Businesses
Authors: Aishwarya Rai
Abstract:
This study explores the transformative influence of generative AI on startups, businesses, and industries. We examine how large businesses can benefit in the area of customer operations, where AI-powered chatbots can improve self-service and agent effectiveness, greatly increasing efficiency. In marketing and sales, generative AI could transform businesses by automating content development, data utilization, and personalization, resulting in a substantial increase in marketing and sales productivity. In software engineering-focused startups, generative AI can streamline activities, significantly impacting coding processes and work experiences. It can be extremely useful in product R&D for market analysis, virtual design, simulations, and test preparation, altering old workflows and increasing efficiency. Zooming into the retail and CPG industry, industry findings suggest a 1-2% increase in annual revenues, equating to $400 billion to $660 billion. By automating customer service, marketing, sales, and supply chain management, generative AI can streamline operations, optimizing personalized offerings and presenting itself as a disruptive force. While celebrating this economic potential, we acknowledge challenges like external inference and adversarial attacks. Human involvement remains crucial for quality control and security in the era of generative AI-driven transformative innovation. This talk provides a comprehensive exploration of generative AI's pivotal role in reshaping businesses, recognizing its strategic impact on customer interactions, productivity, and operational efficiency.
Keywords: generative AI, digital transformation, LLM, artificial intelligence, startups, businesses
Procedia PDF Downloads 76
3270 XAI Implemented Prognostic Framework: Condition Monitoring and Alert System Based on RUL and Sensory Data
Authors: Faruk Ozdemir, Roy Kalawsky, Peter Hubbard
Abstract:
Accurate estimation of RUL provides a basis for effective predictive maintenance, reducing unexpected downtime for industrial equipment. However, while models such as the Random Forest have effective predictive capabilities, they are so-called 'black box' models, whose limited interpretability is a barrier to the critical diagnostic decisions involved in industries such as aviation. The purpose of this work is to present a prognostic framework that embeds Explainable Artificial Intelligence (XAI) techniques in order to provide essential transparency into the decision-making mechanisms of Machine Learning methods based on sensor data, with the objective of procuring actionable insights for the aviation industry. Sensor readings are gathered from critical equipment such as turbofan jet engines and landing gear, and the prediction of the RUL is done by a Random Forest model. The process involves data gathering, feature engineering, model training, and evaluation, with models trained and evaluated independently on each critical component's dataset. Although the resulting predictions and their performance metrics are reasonably good, such complex models obscure the reasoning behind their predictions and may undermine the confidence of the decision-maker or the maintenance teams. This is followed, in the second phase, by global explanations using SHAP and local explanations using LIME to bridge the reliability gap within industrial contexts. These tools analyze model decisions, highlighting feature importance and explaining how each input variable affects the output. This dual approach offers a general comprehension of the overall model behavior and detailed insight into specific predictions. The proposed framework, in its third component, incorporates causal analysis in the form of Granger causality tests in order to move beyond correlation toward causation.
This not only allows the model to predict failures but also presents the reasons, linking key sensor features to possible failure mechanisms, to relevant personnel. Establishing causality between sensor behaviors and equipment failures creates much value for maintenance teams through better root cause identification and effective preventive measures, and contributes to making the system more explainable. In yet another stage, several simple surrogate models, including Decision Trees and Linear Models, are used to approximately represent the complex Random Forest model. These simpler models act as backups, replicating important aspects of the original model's behavior. If the feature explanations obtained from the surrogate model are cross-validated with the primary model, the derived insights become more reliable and provide an intuitive sense of how the input variables affect the predictions. We then create an iterative explainable feedback loop, where the knowledge learned from the explainability methods feeds back into the training of the models. This drives a cycle of continuous improvement in both model accuracy and interpretability over time. By systematically integrating new findings, the model is expected to adapt to changed conditions and further develop its prognostic capability. These components are then presented to decision-makers through a fully transparent condition monitoring and alert system. The system provides a holistic tool for maintenance operations by leveraging RUL predictions, feature importance scores, persistent sensor threshold values, and autonomous alert mechanisms.
Since the system provides explanations for its predictions, along with active alerts, maintenance personnel can make informed decisions about the correct interventions to extend the life of the critical machinery.
Keywords: predictive maintenance, explainable artificial intelligence, prognostic, RUL, machine learning, turbofan engines, C-MAPSS dataset
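The alert layer described above can be sketched in a few lines. The function below is a hypothetical reading of the condition-monitoring component: the RUL horizon, sensor names, and threshold values are all assumptions for illustration, not figures from the paper.

```python
# Toy alert mechanism combining a predicted-RUL check with persistent
# per-sensor thresholds (all names and limits here are illustrative).

def raise_alerts(rul_cycles, sensor_readings, rul_limit=30, sensor_limits=None):
    """Return a list of human-readable alert strings; empty list = healthy."""
    sensor_limits = sensor_limits or {}
    alerts = []
    if rul_cycles < rul_limit:
        alerts.append(f"RUL {rul_cycles} below maintenance horizon {rul_limit}")
    for name, value in sensor_readings.items():
        limit = sensor_limits.get(name)
        if limit is not None and value > limit:
            alerts.append(f"sensor {name} at {value} exceeds threshold {limit}")
    return alerts
```

In the framework described, each alert would additionally carry the SHAP/LIME feature attributions for the prediction that triggered it, so the maintenance team sees the reason along with the warning.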
Procedia PDF Downloads 6
3269 Thermal Decomposition Behaviors of Hexafluoroethane (C2F6) Using Zeolite/Calcium Oxide Mixtures
Authors: Kazunori Takai, Weng Kaiwei, Sadao Araki, Hideki Yamamoto
Abstract:
HFC and PFC gases have been commonly and widely used as refrigerants in air conditioners and as etching agents in semiconductor manufacturing processes because of their high heat of vaporization and chemical stability. On the other hand, HFC and PFC gases have a high global warming effect on the earth. Therefore, these gases emitted from chemical apparatus such as refrigerators have to be decomposed. Until now, disposal of these gases was carried out mainly by combustion methods such as rotary kiln treatment. However, this treatment needs extremely high temperatures, over 1000 °C. In recent years, in order to reduce energy consumption, hydrolytic decomposition methods using catalysts and plasma decomposition treatment have attracted much attention as new disposal treatments. However, the decomposition of fluorine-containing gases under wet conditions cannot avoid the generation of hydrofluoric acid. Hydrofluoric acid is a corrosive gas and deteriorates the catalysts in the decomposition process. Moreover, an additional process for the neutralization of the hydrofluoric acid is also indispensable. In this study, the decomposition of C2F6 using zeolite and zeolite/CaO mixtures as reactants was evaluated under dry conditions at 923 K. The effect of the chemical structure of the zeolite on the decomposition reaction was confirmed using H-Y, H-Beta, H-MOR and H-ZSM-5. The formation of CaF2 in the zeolite/CaO mixtures after the decomposition reaction was confirmed by XRD measurements. The decomposition of C2F6 using zeolite alone as the reactant showed closely similar behavior regardless of the type of zeolite (MOR, Y, ZSM-5, Beta). There was no difference in the XRD patterns of each zeolite before and after the reaction. On the other hand, differences in C2F6 decomposition among the zeolite/CaO mixtures were observed.
These results suggested that the rate-determining process for C2F6 decomposition on zeolite alone is the removal of fluorine from the reactive site. In other words, C2F6 decomposition improved for the zeolite/CaO mixtures compared with the zeolite alone because of the removal of fluorine from the reactive site. H-MOR/CaO showed 100% decomposition for 3.5 h, a significant improvement over the zeolite alone. On the other hand, Y-type zeolite showed no improvement, that is, almost the same value as Y-type zeolite alone. The descending order of C2F6 decomposition was MOR, ZSM-5, Beta and Y-type zeolite. This order is similar to the order of acid strength characterized by NH3-TPD. Hence, it is considered that C-F bond cleavage is closely related to acid strength.
Keywords: hexafluoroethane, zeolite, calcium oxide, decomposition
Procedia PDF Downloads 482
3268 Numerical Analysis of Wire Laser Additive Manufacturing for Low Carbon Steels
Authors: Juan Manuel Martinez Alvarez, Michele Chiumenti
Abstract:
This work explores the benefit of thermo-metallurgical simulation in tackling the Wire Laser Additive Manufacturing (WLAM) of low-carbon steel components. The Finite Element Analysis is calibrated against process monitoring via thermal imaging and thermocouple measurements to study the complex thermo-metallurgical behavior inherent to the WLAM process of low-carbon steel parts. A critical aspect is the analysis of the heterogeneity in the resulting microstructure. This heterogeneity depends on both the thermal history and the residual stresses experienced during the WLAM process. Because low-carbon grades are highly sensitive to quenching, a high-gradient microstructure often arises due to the layer-by-layer metal deposition in WLAM. The different phases have been identified by scanning electron microscopy. A clear influence of the heterogeneities on the final mechanical performance has been established by the subsequent mechanical characterization. The thermo-metallurgical analysis has been used to determine the actual thermal history and the corresponding thermal gradients during the printing process, and the correlation between the thermo-mechanical evolution, the printing parameters and the scanning sequence has been established. Therefore, an enhanced printing strategy, including an optimized process window, has been used to minimize the microstructure heterogeneity at ArcelorMittal.
Keywords: additive manufacturing, numerical simulation, metallurgy, steel
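The thermal-history computation in the work above is a full finite-element analysis; as a much-simplified illustration of the underlying idea, one explicit finite-difference step of 1D heat conduction shows how a temperature profile evolves between deposition passes (geometry, properties, and boundary handling are assumptions, not the paper's model):

```python
# One explicit time step of the 1D heat equation dT/dt = alpha * d2T/dx2,
# with fixed-temperature end nodes. Purely illustrative of "thermal history".

def cool_step(temps, alpha, dx, dt):
    """Advance the nodal temperature list by one explicit step."""
    r = alpha * dt / dx ** 2   # stability requires r <= 0.5 for this scheme
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + r * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
    return new
```

Repeating such steps between simulated deposition events is, in spirit, how a thermo-metallurgical solver tracks the cooling rates that drive phase transformation in quench-sensitive low-carbon grades.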
Procedia PDF Downloads 71
3267 Influence of Temperature on Properties of MOSFETs
Authors: Azizi Cherifa, O. Benzaoui
Abstract:
The thermal aspects in the design of power circuits often deserve as much attention as the purely electrical aspects, as the operating temperature has a direct influence on static and dynamic characteristics. The MOSFET is fundamental in these circuits; it is the most widely used device in the current production of semiconductor components owing to its excellent performance. This contribution is devoted to the effect of temperature on the properties of MOSFETs. The study enables us to calculate the drain current as a function of bias in both the linear and saturation modes. The effect of temperature is evaluated using numerical simulation, based on the laws governing carrier mobility and saturation velocity as functions of temperature.
Keywords: temperature, MOSFET, mobility, transistor
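The drain current in the linear and saturation modes follows the standard long-channel square-law model; a sketch with illustrative threshold voltage and transconductance parameters (the paper's actual device values are not given in the abstract):

```python
# Square-law MOSFET model: I_D = k[(Vgs - Vt)Vds - Vds^2/2] in the linear
# (triode) region, and I_D = (k/2)(Vgs - Vt)^2 in saturation.
# vt and k are illustrative; temperature enters a real model through the
# temperature dependence of mobility (hence k) and of Vt.

def drain_current(vgs, vds, vt=1.0, k=2e-3):
    vov = vgs - vt                 # overdrive voltage
    if vov <= 0:
        return 0.0                 # cutoff
    if vds < vov:
        return k * (vov * vds - vds ** 2 / 2)   # linear (triode) region
    return 0.5 * k * vov ** 2                   # saturation region
```

Note the two expressions agree at the boundary vds = vgs - vt, so the computed I-V curve is continuous across the regions.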
Procedia PDF Downloads 346
3266 Intelligent Software Architecture and Automatic Re-Architecting Based on Machine Learning
Authors: Gebremeskel Hagos Gebremedhin, Feng Chong, Heyan Huang
Abstract:
A software system is the combination of an architecture and organized components that accomplish a specific function or set of functions. A good software architecture facilitates application system development, promotes achievement of functional requirements, and supports system reconfiguration. We describe three studies demonstrating the utility of our architecture in the subdomain of mobile office robots and identify the software engineering principles embodied in the architecture. The main aim of this paper is to analyze the proposed architecture design and automatic re-architecting using machine learning. Automatic re-architecting reorganizes the software organizational structure into a more suitable one, using a user-access dataset to create relationships among the components of the system. A three-step data mining approach, covering recovery, transformation, and implementation, was applied with a clustering algorithm. Therefore, automatic re-architecting without changing the source code makes it possible to address the software complexity problem and enable software reuse.
Keywords: intelligence, software architecture, re-architecting, software reuse, high-level design
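The clustering step can be illustrated with a toy sketch: components that are accessed together in the same user sessions are grouped into candidate modules. The similarity measure (Jaccard over session sets) and the greedy merge rule are assumptions for illustration; the paper does not specify its clustering algorithm.

```python
# Toy component clustering from a user-access dataset: each component maps
# to the set of user-session IDs that touched it; components whose session
# sets overlap enough are grouped together.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cluster_components(access_log, threshold=0.5):
    """Greedy single pass: join the first cluster whose representative
    component shares at least `threshold` Jaccard similarity."""
    clusters = []
    for comp, sessions in access_log.items():
        for cluster in clusters:
            if jaccard(sessions, access_log[cluster[0]]) >= threshold:
                cluster.append(comp)
                break
        else:
            clusters.append([comp])
    return clusters
```

The resulting groups suggest a re-architected module boundary without touching the source code, which is the point made in the abstract.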
Procedia PDF Downloads 119
3265 Deep Learning-Based Object Detection on Low Quality Images: A Case Study of Real-Time Traffic Monitoring
Authors: Jean-Francois Rajotte, Martin Sotir, Frank Gouineau
Abstract:
The installation and management of traffic monitoring devices can be costly from both a financial and a resource point of view. It is therefore important to take advantage of in-place infrastructure to extract the most information. Here we show how low-quality urban road traffic images from cameras already available in many cities (such as Montreal, Vancouver, and Toronto) can be used to estimate traffic flow. To this end, we use a pre-trained neural network, developed for object detection, to count vehicles within images. We then compare the results with human annotations gathered through crowdsourcing campaigns. We use this comparison to assess performance and calibrate the neural network annotations. As a use case, we consider six months of continuous monitoring over hundreds of cameras installed in the city of Montreal. We compare the results with city-provided manual traffic counting performed in similar conditions at the same locations. The good performance of our system allows us to consider applications that monitor traffic conditions in near real-time, making the counting usable for traffic-related services. Furthermore, the resulting annotations pave the way for building a historical vehicle counting dataset to be used for analysing the impact of road traffic on many city-related issues, such as urban planning, security, and pollution.
Keywords: traffic monitoring, deep learning, image annotation, vehicles, roads, artificial intelligence, real-time systems
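The abstract does not detail how the detector counts are calibrated against the crowdsourced human counts; a plausible minimal approach, shown here as a sketch with hypothetical helper names, is an ordinary least-squares linear correction fitted on images that have both a detector count and a human count:

```python
# Fit a linear calibration: human_count ~= a * detector_count + b.
# A detector that systematically misses vehicles in low-quality images
# would yield a slope a > 1; the fitted line corrects new counts.

def fit_calibration(detector_counts, human_counts):
    n = len(detector_counts)
    mx = sum(detector_counts) / n
    my = sum(human_counts) / n
    sxx = sum((x - mx) ** 2 for x in detector_counts)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(detector_counts, human_counts))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def calibrate(count, a, b):
    """Apply the fitted correction to a raw detector count."""
    return a * count + b
```

Once fitted on the crowdsourced subset, the correction can be applied to every frame from every camera, which is what makes the counts usable for near real-time services.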
Procedia PDF Downloads 200
3264 Supply Chain Optimization for Silica Sand in a Glass Manufacturing Company
Authors: Ramon Erasmo Verdin Rodriguez
Abstract:
Many have been the ways in which managers and gurus have historically tried to get closer to the perfect supply chain, but since this topic is so vast, and grows more complex the bigger the companies are, the duty has certainly not been easy. In this research, you are going to see through the entrails of the logistics at a glass manufacturing company for the number one raw material of the process, which is silica sand. After a very quick passage through the supply chain, this document focuses on the way raw materials flow through the system, so that an analysis and research can take place to improve the logistics. Through Operations Research techniques, the current scheme of distribution and inventories of raw materials at the glass company's plants is analyzed, so that after a mathematical conceptualization process, the supply chain can be optimized with the purpose of reducing the uncertainty of supply and obtaining an economic benefit at the very end of this research.
Keywords: inventory management, operations research, optimization, supply chain
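The mathematical conceptualization is not detailed in the abstract; one classic inventory-management building block that such an Operations Research analysis might use for the silica sand stock is the Economic Order Quantity formula (the demand and cost figures in the usage below are invented for illustration):

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Economic Order Quantity: the order size minimizing the sum of
    annual ordering cost and annual holding cost.

    EOQ = sqrt(2 * D * S / H), where D is annual demand (e.g., tonnes of
    silica sand), S is the fixed cost per order, and H is the holding
    cost per unit per year."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)
```

For example, with an assumed 10,000 tonnes/year demand, $50 per order, and $2/tonne-year holding cost, `eoq(10000, 50, 2)` gives roughly 707 tonnes per order; a full supply chain model would extend this with transport and multi-plant distribution constraints.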
Procedia PDF Downloads 326