Search results for: deep drawing process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17365

16285 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records

Authors: Sara ElElimy, Samir Moustafa

Abstract:

Mobile network operators are facing many challenges in the digital era, especially with high demands from customers. Mobile network operators are a major source of big data, and traditional techniques are no longer effective in the new era of big data, the Internet of Things (IoT) and 5G; as a result, handling different big datasets effectively becomes a vital task for operators with the continuous growth of data and the move from long term evolution (LTE) to 5G. There is therefore an urgent need for effective big data analytics to predict future demands, traffic, and network performance in order to fulfill the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and the recurrent neural network (RNN) are employed in a data-driven application for mobile network operators. The main framework of the models includes identification of the parameters of each model, estimation, prediction, and a final data-driven application of this prediction to business and network performance use cases. These models are applied to the Telecom Italia Big Data Challenge call detail records (CDRs) datasets. The performance of these models, assessed using well-known evaluation criteria, shows that ARIMA (the machine learning-based model) is more accurate as a predictive model on this dataset than the RNN (the deep learning model).
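As a rough illustration of the ARIMA branch of this comparison, the sketch below fits an ARIMA model to an hourly call-volume series and forecasts one day ahead; the synthetic series, the (2, 1, 2) order and the RMSE criterion are assumptions chosen for demonstration, since the abstract does not specify them.

```python
# Illustrative sketch only: the abstract does not give ARIMA orders or the
# exact CDR aggregation, so the hourly series, order and horizon are assumed.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical hourly call-volume series aggregated from CDRs for one cell.
rng = pd.date_range("2013-11-01", periods=24 * 14, freq="H")
cdr_traffic = pd.Series(
    100 + 20 * np.sin(2 * np.pi * np.arange(len(rng)) / 24)
    + np.random.randn(len(rng)) * 5,
    index=rng,
)

train, test = cdr_traffic[:-24], cdr_traffic[-24:]

# Fit an ARIMA(p, d, q) model on the training window and forecast one day ahead.
model = ARIMA(train, order=(2, 1, 2)).fit()
forecast = model.forecast(steps=len(test))

# Evaluate with a common criterion (RMSE) as a stand-in for the paper's metric.
rmse = np.sqrt(np.mean((forecast.values - test.values) ** 2))
print(f"24-hour-ahead RMSE: {rmse:.2f}")
```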

Keywords: big data analytics, machine learning, CDRs, 5G

Procedia PDF Downloads 137
16284 Mechanical and Optical Properties of Doped Aluminum Nitride Thin Films

Authors: Padmalochan Panda, R. Ramaseshan

Abstract:

Aluminum nitride (AlN) is a potential candidate for the semiconductor industry due to its wide band gap (6.2 eV), high thermal conductivity and low coefficient of thermal expansion. A-plane oriented AlN films play an important role in deep UV-LEDs with higher isotropic light extraction efficiency. Also, Cr-doped AlN films exhibit dilute magnetic semiconductor behaviour with a high Curie temperature (300 K), and are thus compatible with modern-day microelectronics. In this work, highly a-axis oriented wurtzite AlN and Al1-xMxN (M = Cr, Ti) films have been synthesized by a reactive co-sputtering technique at different concentrations. The crystal structure of these films is studied by grazing incidence X-ray diffraction (GIXRD) and transmission electron microscopy (TEM). The binding energies and concentrations (x) in these films are determined by X-ray photoelectron spectroscopy (XPS). The local crystal structure around the Cr and Ti atoms in these films is investigated by X-ray absorption spectroscopy (XAS). It is found that Cr and Ti replace Al atoms in the AlN lattice, and the bond lengths in the first and second coordination spheres, with N and Al respectively, decrease with doping concentration due to strong p-d hybridization. The nano-indentation hardness of the Cr- and Ti-doped AlN films increases from 17.5 GPa (AlN) to around 23 and 27.5 GPa, respectively. Anisotropic optical properties of these films are studied by spectroscopic ellipsometry. The refractive index and extinction coefficient of these films are enhanced in the normal dispersion region compared to the parent AlN film. The optical band gap energies also vary between the deep UV and UV regions with the addition of Cr, bringing out the usefulness of these films for optoelectronic device applications.

Keywords: ellipsometry, GIXRD, hardness, XAS

Procedia PDF Downloads 109
16283 Automated CNC Part Programming and Process Planning for Turned Components

Authors: Radhey Sham Rajoria

Abstract:

Pressure to increase competitiveness in the manufacturing sector and to survive in the market has led to the development of machining centres, which enhance productivity, improve quality, shorten lead time, and reduce manufacturing cost. With this innovation, production lines have been replaced by machining centres, which can perform various machining processes on the same part with multiple tooling and an automatic tool changer (ATC). Process plans can also be generated easily for complex components. However, some means are required to utilize the machining centre to its full potential. The present work concentrates on automated part program generation, and in turn automated process plan generation, for turned components on a Denford “MIRAC” 8-station ATC lathe machining centre. A package is developed in C++ on the DOS platform which generates the complete CNC part program, process plan and process sequence for turned components. The input to this system is a blueprint in graphical format with machining parameters and variables, and the output is the CNC part program, which is stored in a .mir file ready for execution on the machining centre.

Keywords: CNC, MIRAC, ATC, process planning

Procedia PDF Downloads 264
16282 Optimized Deep Learning-Based Facial Emotion Recognition System

Authors: Erick C. Valverde, Wansu Lim

Abstract:

Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) and would support better human social interaction with smart technologies. An FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies human expressions into categories such as angry, disgust, fear, happy, sad, surprise, and neutral. Such a system requires intensive research to address issues with human diversity, unique human expressions, and the variety of human facial features due to age differences. These issues generally limit the ability of an FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms such as K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems suffer from low accuracy because they cannot extract significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, such as convolutional neural networks (CNN), have been proposed. These methods can find more complex features in the human face by means of the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to lower computational costs and reduce memory usage, respectively. To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to apply advanced optimization techniques to the Xception model and improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the benefits of the network optimization methods used. As a result, the proposed approach can be used to create an efficient, real-time FER system.
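The sketch below illustrates the two named optimizations, pruning and quantization, on a Keras Xception model using the TensorFlow Model Optimization toolkit and TFLite conversion; the toolkit choice, sparsity schedule and class count are assumptions, not details taken from the paper.

```python
# Illustrative sketch of the two optimizations named in the abstract (weight
# pruning and post-training quantization) applied to a Keras Xception model.
# The TensorFlow Model Optimization toolkit, the sparsity schedule and the
# 7 emotion classes are assumptions; the paper does not give these details.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Xception classifier for 7 emotion categories (299x299 RGB inputs by default).
base = tf.keras.applications.Xception(weights=None, classes=7)

# 1) Pruning: gradually zero out low-magnitude weights during fine-tuning.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=1000)
pruned = tfmot.sparsity.keras.prune_low_magnitude(base, pruning_schedule=schedule)
pruned.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# pruned.fit(train_ds, epochs=..., callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Remove the pruning wrappers before export.
stripped = tfmot.sparsity.keras.strip_pruning(pruned)

# 2) Quantization: convert to an optimized TFLite model to cut memory use.
converter = tf.lite.TFLiteConverter.from_keras_model(stripped)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("fer_xception_optimized.tflite", "wb") as f:
    f.write(converter.convert())
```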

Keywords: deep learning, face detection, facial emotion recognition, network optimization methods

Procedia PDF Downloads 117
16281 Application of Lean Six Sigma Tools to Minimize Time and Cost in Furniture Packaging

Authors: Suleiman Obeidat, Nabeel Mandahawi

Abstract:

In this work, the packaging process for a household move is improved. The customers of such a move need their household goods to be moved from their current house to the new one with minimum damage, in an organized manner, on time and at minimum cost. Our goal was a 10% to 20% improvement in time efficiency, a 90% reduction in damaged parts and an acceptable improvement in the total cost of the move. The expected ROI was 833%. Many improvement techniques have been applied to the way the boxes are prepared, their preparation cost, packing the goods, labeling them, and moving them to a designated place for moving out. The DMAIC technique is used in this work: a SIPOC diagram, a value stream map of the “As Is” process, root cause analysis, maps of the “Future State” and “Ideal State”, and an improvement plan. A value of ROI = 624% is obtained, which is lower than the expected value of 833%. The work explains the improvement techniques and the deficiencies in the old process.

Keywords: packaging, lean tools, six sigma, DMAIC methodology, SIPOC

Procedia PDF Downloads 424
16280 Trajectory Optimization for Autonomous Deep Space Missions

Authors: Anne Schattel, Mitja Echim, Christof Büskens

Abstract:

Trajectory planning for deep space missions has become a recent topic of great interest. Flying to space objects like asteroids provides two main challenges: one is to find rare earth elements, the other to gain scientific knowledge of the origin of the world. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical field of optimization and optimal control can be used to realize autonomous missions while protecting resources and making them safer. The resulting algorithms may be applied to other, earth-bound applications such as deep sea navigation and autonomous driving as well. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All') investigates the possibilities of cognitive autonomous navigation on the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing and surface exploration. To verify and test all methods, an interactive, real-time capable simulation using virtual reality is developed within KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e. trajectory optimization and optimal control, including first solutions and results. In principle there exist two ways to solve optimal control problems (OCPs), the so-called indirect and direct methods. The indirect methods have been studied for several decades, and their use requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by special NLP methods, e.g. sequential quadratic programming (SQP) or interior point methods (IP). The movement of the spacecraft due to gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). Competing mission aims such as short flight times and low energy consumption are considered by using a multi-criteria objective function. The resulting non-linear high-dimensional optimization problems are solved using the software package WORHP ('We Optimize Really Huge Problems'), a routine combining SQP at the outer level with IP to solve the underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the need for trajectory optimization as a foundation for autonomous decision making during deep space missions. Simultaneously, they show the enormous increase in possibilities for flight maneuvers gained by being able to consider different and opposite mission objectives.
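A minimal sketch of the direct-transcription idea on a toy 1-D double integrator (not the mission dynamics): states and controls are fully discretized, trapezoidal defect constraints stand in for the ODEs, and SciPy's SLSQP plays the role of the SQP solver; all weights, dynamics and boundary values are illustrative assumptions.

```python
# Toy full-discretization (direct transcription) of an optimal control problem.
# A flight-time criterion could be added with its own weight to mirror the
# multi-criteria objective described above; only the energy term is shown.
import numpy as np
from scipy.optimize import minimize

N, T = 30, 10.0            # grid points, fixed transfer time
dt = T / (N - 1)
w_energy = 1.0             # weight of the energy criterion in the objective

def unpack(z):
    x = z[: 2 * N].reshape(N, 2)   # position and velocity at each node
    u = z[2 * N :]                 # thrust (acceleration) at each node
    return x, u

def objective(z):
    _, u = unpack(z)
    return w_energy * dt * np.sum(u ** 2)   # discretized energy cost

def defects(z):
    # Trapezoidal collocation: x_{k+1} - x_k - dt/2 * (f_k + f_{k+1}) = 0
    x, u = unpack(z)
    f = np.column_stack([x[:, 1], u])       # dynamics: dx/dt = v, dv/dt = u
    return (x[1:] - x[:-1] - 0.5 * dt * (f[1:] + f[:-1])).ravel()

def boundary(z):
    x, _ = unpack(z)
    return np.concatenate([x[0] - [0.0, 0.0], x[-1] - [1.0, 0.0]])  # rest-to-rest

z0 = np.zeros(3 * N)
res = minimize(
    objective, z0, method="SLSQP",
    constraints=[{"type": "eq", "fun": defects}, {"type": "eq", "fun": boundary}],
    options={"maxiter": 500},
)
x_opt, u_opt = unpack(res.x)
print("converged:", res.success, "| energy cost:", res.fun)
```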

Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning

Procedia PDF Downloads 411
16279 Influence of Thermal History on the Undrained Shear Strength of the Bentonite-Sand Mixture

Authors: K. Ravi, Sabu Subhash

Abstract:

Densely compacted bentonite or bentonite-sand mixture has been identified as a suitable buffer in deep geological repositories (DGR) for the safe disposal of high-level nuclear waste (HLW) due to its favourable physicochemical and hydro-mechanical properties. The addition of sand to the bentonite enhances the thermal conductivity and compaction properties and reduces the drying shrinkage of the buffer material. The buffer material may undergo cyclic wetting and drying upon ingress of groundwater from the surrounding rock mass and evaporation due to the high temperature (50–210 °C) generated by the waste canister. These temperature cycles impose a thermal history on the buffer material, which may affect its hydro-mechanical properties. This paper examines the influence of thermal history on the undrained shear strength of bentonite and a bentonite-sand mixture. Bentonite from Rajasthan state and sand from the Assam state of India are used in this study. The undrained shear strength values are obtained by conducting unconfined compressive strength (UCS) tests on cylindrical specimens (dry densities 1.30 and 1.50 Mg/m3) of bentonite and a bentonite-sand mixture consisting of 30% bentonite + 70% sand. The specimens are preheated at temperatures varying from 50 to 150 °C for one, two and four hours in a hot air oven. The results indicate that the undrained shear strength is increased by the thermal history of the buffer material. The bentonite-sand mixture specimens exhibit a greater increase in strength than the pure bentonite specimens. This indicates that the sand content of the mixture plays a vital role in accommodating the thermal stresses on the bentonite buffer under DGR conditions.

Keywords: bentonite, deep geological repository, thermal history, undrained shear strength

Procedia PDF Downloads 341
16278 Cognitive Model of Analogy Based on Operation of the Brain Cells: Glial, Axons and Neurons

Authors: Ozgu Hafizoglu

Abstract:

Analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems with attributional, deep structural, causal relations that are essential to learning, to innovation in artificial worlds, and to discovery in science. The Cognitive Model of Analogy (CMA) leads and creates information pattern transfer within and between domains and disciplines in science. This paper demonstrates the CMA as an evolutionary approach to scientific research. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system in order to enable the reasoning methodology to adapt to changing conditions. In this paper, the model of analogical reasoning is created based on brain cells, their fractal forms, and their operational forms within the system itself. Visualization techniques are used to show correspondences. Distinct phases of the problem-solving process are distinguished: encoding, mapping, inference, and response. The system is related to brain activation in each of these phases, with an emphasis on achieving a better visualization of the brain cells (glial cells, axons, axon terminals, and neurons) relative to the matching conditions of analogical reasoning and relational information. It is found that the encoding, mapping, inference, and response processes in four-term analogical reasoning correspond with the fractal and operational forms of the brain cells: glial cells, axons, and neurons.

Keywords: analogy, analogical reasoning, cognitive model, brain and glials

Procedia PDF Downloads 181
16277 Sentiment Analysis of Chinese Microblog Comments: Comparison between Support Vector Machine and Long Short-Term Memory

Authors: Xu Jiaqiao

Abstract:

Text sentiment analysis is an important branch of natural language processing. This technology is widely used in public opinion analysis and web browsing recommendations. At present, mainstream sentiment analysis methods fall into three categories: methods based on a sentiment dictionary, on traditional machine learning, and on deep learning. This paper analyzes and compares the advantages and disadvantages of the support vector machine (SVM) method from traditional machine learning and the long short-term memory (LSTM) method from deep learning in the field of Chinese sentiment analysis, using Chinese comments on Sina Microblog as the dataset. Firstly, this paper classifies and labels the original comment dataset obtained by a web crawler, and then uses Jieba word segmentation to segment the original dataset and remove stop words. After that, the paper extracts text feature vectors and builds document word vectors to facilitate the training of the models. Finally, the SVM and LSTM models are trained respectively. The accuracy of the LSTM model is 85.80%, while the accuracy of the SVM is 91.07%; at the same time, LSTM inference needs only 2.57 seconds, whereas the SVM model needs 6.06 seconds. Therefore, this paper concludes that, compared with the SVM model, the LSTM model is worse in accuracy but faster in processing speed.
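A minimal sketch of the SVM branch of this comparison, assuming Jieba segmentation followed by TF-IDF features and a linear SVM; the tiny in-line dataset and stop-word list are placeholders for the crawled Sina Microblog comments and the full Chinese stop-word list.

```python
# Jieba segmentation + TF-IDF + linear SVM, as a stand-in for the SVM pipeline.
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

comments = ["这部电影太好看了", "服务态度很差，不会再来", "质量不错，值得购买", "物流太慢了，很失望"]
labels = [1, 0, 1, 0]           # 1 = positive, 0 = negative
stop_words = {"了", "的", "很", "，"}   # placeholder stop-word list

def tokenize(text):
    # Jieba word segmentation with stop-word removal.
    return [w for w in jieba.lcut(text) if w not in stop_words and w.strip()]

model = make_pipeline(
    TfidfVectorizer(tokenizer=tokenize, token_pattern=None),
    LinearSVC(),
)
model.fit(comments, labels)
print(model.predict(["电影很好看"]))   # expected: [1]
```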

Keywords: sentiment analysis, support vector machine, long short-term memory, Chinese microblog comments

Procedia PDF Downloads 89
16276 Accelerated Aging of Photopolymeric Material Used in Flexography

Authors: S. Mahovic Poljacek, T. Tomasegovic, T. Cigula, D. Donevski, R. Szentgyörgyvölgyi, S. Jakovljevic

Abstract:

In this paper, the degradation of a photopolymeric material (PhPM), used as a printing plate in the flexographic reproduction technique, caused by accelerated aging is observed. Since the basic process for the production of printing plates from the PhPM is radical cross-linking caused by exposure to UV wavelengths, the assumption was that improper storage or irregular handling of the PhPM plate can change the surface and structural characteristics of the plates. Results have shown that the aging process causes degradation of the structure and changes in the surface of the PhPM printing plate.

Keywords: aging process, artificial treatment, flexography, photopolymeric material (PhPM)

Procedia PDF Downloads 345
16275 Islamic Banking Recovery Process and Its Parameters: A Practitioner’s Viewpoints in the Light of Humanising Financial Services

Authors: Muhammad Izzam Bin Mohd Khazar, Nur Adibah Binti Zainudin

Abstract:

Islamic banks, like other financial institutions, are required to maintain a prudent approach to ensure that any financing given is able to generate income for their respective shareholders. As customer payment defaults can occur in financing, a prudent approach in the recovery process is a must to ensure that financing losses are kept within acceptable limits. The objective of this research is to identify best practice in recovery that is anticipated to benefit both bank and customers. This study addresses issues arising in the current recovery process and then proposes humanising recovery solutions in the light of Maqasid Shariah. The study identifies the main issues pertaining to the Islamic recovery process, which can be categorized into knowledge crisis, process issues, specific treatment cases and system issues. The knowledge crisis relates to the parties directly involved, including judges, solicitors and salespersons, while the recovery process issues include the issuance of reminders, foreclosure and repossession of assets. Furthermore, special treatment for particular cases should be observed, since different contracts in Islamic banking products need different treatment. Finally, issues in the systems used in the recovery process remain unresolved, since the existing technology in this area is still too immature to embrace Islamic finance requirements and their nature of calculation. In order to humanise financial services in the Islamic banking recovery process, we highlight four main recommendations to be implemented by Islamic financial institutions: 1) early deterrence by improving awareness, 2) improvement of the internal process, 3) a reward mechanism, and 4) creative penalties to raise awareness among all stakeholders.

Keywords: humanizing financial services, Islamic Finance, Maqasid Syariah, recovery process

Procedia PDF Downloads 195
16274 Dissipation of Tebuconazole in Cropland Soils as Affected by Soil Factors

Authors: Bipul Behari Saha, Sunil Kumar Singh, P. Padmaja, Kamlesh Vishwakarma

Abstract:

A dissipation study of tebuconazole in alluvial, black and deep-black clayey soils collected from paddy, mango and peanut cropland in a tropical agro-climatic zone of India was carried out at three concentration levels to monitor water contamination through persistent residual toxicity. The soil-slurry samples were analyzed by capillary GC-NPD following an ultrasound-assisted extraction (UAE) technique and a cleanup process. An excellent linear relationship between peak area and concentration was obtained in the range 1 to 50 μg kg-1. The detection (S/N, 3 ± 0.5) and quantification (S/N, 7.5 ± 2.5) limits were 3 and 10 μg kg-1, respectively. Good spike recoveries of 96.28 to 99.33% were achieved at the 5 and 20 μg kg-1 levels, and the method precision (% RSD) was ≤ 5%. The dissipation of tebuconazole in the soils was fitted to a first-order kinetic model with half-lives between 34.48 and 48.13 days. The soil organic carbon (SOC) content correlated well with the dissipation rate constants (DRC) of the fungicide tebuconazole: an increase in the SOC content resulted in faster dissipation. The results indicate that the soil organic carbon and tebuconazole concentrations play a dominant role in the dissipation process. The initial-concentration experiments illustrated that the degradation rate of tebuconazole in soils was concentration dependent.
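A short sketch of the first-order kinetic fit behind the reported half-lives, using synthetic residue data chosen only for illustration; the study's measured concentrations at each sampling interval would replace them.

```python
# First-order dissipation fit: C(t) = C0 * exp(-k t), half-life = ln(2) / k.
import numpy as np
from scipy.optimize import curve_fit

days = np.array([0, 7, 14, 21, 35, 49, 63, 90], dtype=float)
residue = np.array([50.0, 44.8, 40.2, 36.1, 29.0, 23.3, 18.7, 12.1])  # ug/kg, synthetic

def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

(c0, k), _ = curve_fit(first_order, days, residue, p0=(residue[0], 0.01))
half_life = np.log(2) / k
print(f"rate constant k = {k:.4f} per day, half-life = {half_life:.1f} days")
```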

Keywords: cropland soil, dissipation, laboratory incubation, tebuconazole

Procedia PDF Downloads 248
16273 The Development of an Accident Causation Model Specific to Agriculture: The Irish Farm Accident Causation Model

Authors: Carolyn Scott, Rachel Nugent

Abstract:

The agricultural industry in Ireland and worldwide is one of the most dangerous occupations with respect to occupational health and safety accidents and fatalities. Many accident causation models have been developed in safety research to understand the underlying and contributory factors that lead to the occurrence of an accident. Due to the uniqueness of the agricultural sector, current accident causation theories cannot be applied directly. This paper presents an accident causation model named the Irish Farm Accident Causation Model (IFACM), which has been specifically tailored to the needs of Irish farms. The IFACM is a theoretical and practical model of accident causation that arranges the causal factors into a graphical representation of originating, shaping and contributory factors that lead to accidents when unsafe acts and conditions are created and not rectified by control measures. Causes of farm accidents were gathered by means of a thorough literature review and collated to form a graphical representation of the underlying causes of a farm accident. The IFACM was validated retrospectively through case study analysis and peer review. Participants in the case study (n=10) identified causes that led to a farm accident in which they were involved. A root cause analysis was conducted to understand the contributory factors surrounding each farm accident, traced back to the root cause. Experts in farm safety and accident causation in the agricultural industry have peer-reviewed the IFACM. The accident causation process is complex, and accident prevention requires a comprehensive understanding of this process, because to prevent the occurrence of accidents, their causes must be known. There is little research on the key causes and contributory factors of unsafe behaviours and accidents on Irish farms. The focus of this research is to gain a deep understanding of the causality of accidents on Irish farms. The results suggest that the IFACM framework is helpful for the analysis of the causes of accidents within the agricultural industry in Ireland. The research also suggests that it may be internationally applicable if further research is carried out. Furthermore, significant learning can be obtained from considering the underlying causes of accidents.

Keywords: farm safety, farm accidents, accident causation, root cause analysis

Procedia PDF Downloads 75
16272 Development of Researcher Knowledge in Mathematics Education: Towards a Confluence Framework

Authors: Igor Kontorovich, Rina Zazkis

Abstract:

We present a framework of researcher knowledge development in conducting a study in mathematics education. The key components of the framework are: knowledge germane to conducting a particular study, processes of knowledge accumulation, and catalyzing filters that influence a researcher's decision making. The components of the framework originated from a confluence of constructs and theories in Mathematics Education, Higher Education and Sociology. Drawing on a self-reflective interview with a leading researcher in mathematics education, Professor Michèle Artigue, we illustrate how the framework can be utilized in data analysis. Criteria for evaluating the framework are discussed.

Keywords: community of practice, knowledge development, mathematics education research, researcher knowledge

Procedia PDF Downloads 503
16271 A Comparison between Fuzzy Analytic Hierarchy Process and Fuzzy Analytic Network Process for Rationality Evaluation of Land Use Planning Locations in Vietnam

Authors: X. L. Nguyen, T. Y. Chou, F. Y. Min, F. C. Lin, T. V. Hoang, Y. M. Huang

Abstract:

In Vietnam, land use planning is utilized as an efficient tool for local governments to adjust land use. However, planned locations face disapproval from people who live near these planned sites because of environmental problems. The selection of these locations is normally based on the subjective opinion of decision-makers and is not supported by any scientific methods. Many researchers have applied Multi-Criteria Analysis (MCA) methods, of which the Analytic Hierarchy Process (AHP) is the most popular technique, in combination with fuzzy set theory, to the rationality assessment of land use planning locations. In this research, fuzzy set theory and the Analytic Network Process (ANP) multi-criteria technique were used for the assessment process. The Fuzzy Analytic Hierarchy Process was also utilized, and the output results from the two methods were compared to extract the differences. Twenty planned landfills in Hung Ha district, Thai Binh province, Vietnam were selected as a case study. The comparison results indicate that there are differences between the weights computed by the AHP and ANP methods, and the assessment outputs produced by these two methods also differ slightly. After evaluation of the existing planned sites, some potential locations were suggested to the local government for possible land use planning adjustments.
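For orientation, the sketch below shows the crisp AHP weighting step that both fuzzy variants build on: a pairwise comparison matrix, principal-eigenvector weights and a consistency check. The matrix values and the random index are illustrative assumptions, and the fuzzy scales and ANP supermatrix steps are not shown.

```python
# Crisp AHP weighting from a pairwise comparison matrix (Saaty 1-9 scale).
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])   # reciprocal matrix for four hypothetical siting criteria

# Principal-eigenvector weights.
eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, i].real)
weights /= weights.sum()

# Consistency check: CR = (lambda_max - n) / (n - 1) / RI, with RI(n=4) ~ 0.90.
lambda_max = eigvals.real[i]
n = A.shape[0]
CR = (lambda_max - n) / (n - 1) / 0.90
print("weights:", np.round(weights, 3), "| consistency ratio:", round(CR, 3))
```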

Keywords: Analytic Hierarchy Process, Analytic Network Process, Fuzzy set theory, land use planning

Procedia PDF Downloads 417
16270 In Search of Good Fortune: Individualization, Youth and the Spanish Labour Market within a Context of Crisis

Authors: Matthew Lee Turnbough

Abstract:

In 2007, Spain began to experience the effects of a deep economic crisis, which would generate a situation characterised by instability and uncertainty. This has been an obstacle, especially acute for the youth of the country seeking to enter the workforce. As a result of the impact of COVID-19, young people in Spain are now suffering the effects of a new crisis that has deepened an already fragile labour environment. In this paper, we analyse the discourses that have emerged from a precarious labour market, specifically from two companies dedicated to operating job portals and job listings in Spain, Job Today and CornerJob. These two start-up businesses have developed mobile applications geared towards young adults in search of employment in the service sector and are among the companies with the highest user rates in Spain. Utilizing a discourse analysis approach, we explore the impact of individualization and how the process of psychologization may contribute to an increasing reliance on individual solutions to social problems. As such, we seek to highlight the expectations and demands that are placed upon young workers and the type of subjectivity that this dynamic could foster, all within an unstable framework seemingly marked by chance, a context which is key for the emergence of individualization. Furthermore, we consider the extent to which young adults incorporate these discourses and the strategies they employ, basing our analysis on the VULSOCU (New Forms of Socio-Existential Vulnerability, Supports, and Care in Spain) research project, specifically the results of nineteen in-depth interviews and three discussion groups with young adults in this country. Consequently, we seek to elucidate the argumentative threads rooted in the process of individualization and underline the implications of this dynamic for young workers and their labour insertion, while also identifying manifestations of the goddess of fortune as a representation of chance in this context. Finally, we approach this panorama of social change in Spain from the perspective of the individuals, the young adults, who find themselves immersed in this transition from one crisis to another.

Keywords: chance, crisis, discourses, individualization, work, youth

Procedia PDF Downloads 110
16269 Business Process Management Maturity in Croatian Companies

Authors: V. Bosilj Vuksic

Abstract:

This paper aims to investigate business process management (BPM) maturity in Croatian companies. First, a brief literature review of the research field is given. Next, the results of the empirical research are presented, analyzed and discussed. The results reveal that Croatian companies have achieved an intermediate level of BPM maturity. The empirical evidence supports the proposed theoretical background. Furthermore, a case study approach was used to illustrate BPM adoption in a Croatian company at the highest stage of BPM maturity. In practical terms, this case study identifies the BPM maturity success factors that need to exist in order for a company to adopt BPM effectively.

Keywords: business process management, case study, Croatian companies, maturity, process performance index, questionnaire

Procedia PDF Downloads 226
16268 A Comparative Study on Deep Learning Models for Pneumonia Detection

Authors: Hichem Sassi

Abstract:

Pneumonia, being a respiratory infection, has garnered global attention due to its rapid transmission and relatively high mortality rates. Timely detection and treatment play a crucial role in significantly reducing mortality associated with pneumonia. Presently, X-ray diagnosis stands out as a reasonably effective method. However, the manual scrutiny of a patient's chest X-ray radiograph by a proficient practitioner usually requires 5 to 15 minutes. In situations where cases are concentrated, this places immense pressure on clinicians for timely diagnosis. Relying solely on the visual acumen of imaging doctors proves to be inefficient, particularly given the low speed of manual analysis. Therefore, the integration of artificial intelligence into the clinical image diagnosis of pneumonia becomes imperative. Additionally, AI recognition is notably rapid, with convolutional neural networks (CNNs) demonstrating superior performance compared to human counterparts in image identification tasks. To conduct our study, we utilized a dataset comprising chest X-ray images obtained from Kaggle, encompassing a total of 5,216 training images and 624 test images, categorized into two classes: normal and pneumonia. Employing five mainstream network architectures, we undertook a comprehensive analysis to classify these two classes within the dataset and subsequently compared the results. The integration of artificial intelligence, particularly through improved network architectures, stands as a transformative step towards more efficient and accurate clinical diagnoses across various medical domains.
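A minimal transfer-learning sketch for the binary chest X-ray task described above; the ResNet50 backbone, directory layout, image size and training settings are assumptions, and the study itself compared five architectures rather than this single one.

```python
# Binary pneumonia/normal classifier via transfer learning (illustrative only).
import tensorflow as tf

IMG_SIZE = (224, 224)
preprocess = tf.keras.applications.resnet50.preprocess_input

# Assumed directory layout: chest_xray/{train,test}/{NORMAL,PNEUMONIA}/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/test", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
train_ds = train_ds.map(lambda x, y: (preprocess(x), y))
test_ds = test_ds.map(lambda x, y: (preprocess(x), y))

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False  # reuse ImageNet features; train only the classifier head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. pneumonia
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=test_ds, epochs=5)
print(model.evaluate(test_ds))
```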

Keywords: deep learning, computer vision, pneumonia, models, comparative study

Procedia PDF Downloads 58
16267 A Key Parameter in Ocean Thermal Energy Conversion Plant Design and Operation

Authors: Yongjian Gu

Abstract:

Ocean thermal energy is one of the ocean energy sources. It is a renewable, sustainable, and green energy source. Ocean thermal energy conversion (OTEC) applies the ocean temperature gradient between the warmer surface seawater and the cooler deep seawater to run a heat engine and produce a useful power output. Unfortunately, the ocean temperature gradient is not large. Even in tropical and equatorial regions, the surface water temperature reaches only about 28 °C, while the deep water temperature can be as low as 4 °C. The thermal efficiency of OTEC plants is therefore low. In order to improve plant thermal efficiency using the limited ocean temperature gradient, some OTEC plants add more equipment for better heat recovery, such as heat exchangers, pumps, etc. Obviously, this method increases the plant's complexity and cost. A more important impact of the method is that the additional equipment consumes power too, which may have an adverse effect on the plant's net power output and, in turn, on the plant's thermal efficiency. In the paper, the author first describes various OTEC plants and the practice of adding more equipment to improve plant thermal efficiency. The author then proposes a parameter, the plant back work ratio ϕ, for measuring whether the added equipment is appropriate for improving the plant's thermal efficiency. Finally, the author presents examples to illustrate the application of the back work ratio ϕ as a key parameter in OTEC plant design and operation.
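A small numerical illustration of the proposed parameter, assuming the conventional thermodynamic definition of back work ratio (power consumed internally by pumps and other auxiliary equipment divided by gross turbine output); all figures are invented for illustration and the paper's exact formulation may differ.

```python
# Back work ratio as a screening parameter for added OTEC equipment (assumed
# definition: phi = W_auxiliary / W_gross; net output = W_gross - W_auxiliary).
def back_work_ratio(gross_turbine_kw, auxiliary_kw):
    phi = auxiliary_kw / gross_turbine_kw
    net_kw = gross_turbine_kw - auxiliary_kw
    return phi, net_kw

# Base OTEC plant vs. a variant with extra heat-recovery pumps (made-up values).
for label, gross, aux in [("base plant", 1000.0, 280.0),
                          ("added heat-recovery equipment", 1080.0, 400.0)]:
    phi, net = back_work_ratio(gross, aux)
    print(f"{label}: phi = {phi:.2f}, net power = {net:.0f} kW")
# Although gross output rises, the larger phi flags that the added equipment
# lowers the net power output, which is the situation the parameter targets.
```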

Keywords: ocean thermal energy, ocean thermal energy conversion (OTEC), OTEC plant, plant back work ratio ϕ

Procedia PDF Downloads 192
16266 Adaptation of Climate Change and Building Resilience for Seaports: Empirical Study on Egyptian Mediterranean Seaports

Authors: Alsnosy Balbaa, Mohamed Nabil Elnabawi, Yasmin El Meladi

Abstract:

With the ever-growing concerns of climate change, Mediterranean ports, as vital economic and transport hubs, face unique challenges in maintaining operations and infrastructure. This empirical study seeks to understand the current adaptations and preparedness levels of Egyptian Mediterranean ports against climate-induced disruptions. Drawing from a structured questionnaire, the research gathers insights on observed climate impacts, infrastructure adaptations, operational changes, and stakeholder engagement, aiming to shed light on the resilience of these ports in the face of a changing climate.

Keywords: climate, infrastructures, port, mediterranean

Procedia PDF Downloads 62
16265 Process Monitoring Based on Parameterless Self-Organizing Map

Authors: Young Jae Choung, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a popular technique for process monitoring. A widely used tool in SPC is the control chart, which is used to detect the abnormal status of a process and maintain its controlled status. Traditional control charts, such as Hotelling's T2 control chart, are effective techniques to detect abnormal observations and monitor processes. However, many complicated manufacturing systems exhibit nonlinearity because of the different demands of the market. In this case, the indiscriminate use of a traditional linear modeling approach may not be effective. In reality, many industrial processes exhibit nonlinear and time-varying properties because of fluctuations in process raw materials, slow shifts of the set points, aging of the main process components, seasonal effects, and catalyst deactivation. The use of traditional SPC techniques with time-varying data will degrade the performance of the monitoring scheme. To address these issues, in the present study we propose a parameterless self-organizing map (PLSOM)-based control chart. The PLSOM-based control chart can not only manage situations where the distribution or parameters of the target observations change, but also address the nonlinearity of modern manufacturing systems. The control limits of the proposed PLSOM chart are established by estimating the empirical significance level on the percentile using a bootstrap method. Experimental results with simulated data and actual process data from a thin-film transistor-liquid crystal display process demonstrate the effectiveness and usefulness of the proposed chart.
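A sketch of the bootstrap step used to set the control limit, assuming the monitoring statistic is a SOM quantization error and the limit is an empirical percentile; the in-control statistics below are simulated placeholders rather than outputs of a trained PLSOM.

```python
# Percentile-bootstrap upper control limit from in-control monitoring statistics.
import numpy as np

rng = np.random.default_rng(0)
in_control_stats = rng.gamma(shape=2.0, scale=1.0, size=500)  # placeholder QE values

def bootstrap_ucl(stats, alpha=0.01, n_boot=2000, rng=rng):
    """Upper control limit at significance level alpha via percentile bootstrap."""
    limits = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(stats, size=len(stats), replace=True)
        limits[b] = np.percentile(sample, 100 * (1 - alpha))
    return limits.mean()

ucl = bootstrap_ucl(in_control_stats)
new_stat = 9.5  # monitoring statistic for a new observation
print(f"UCL = {ucl:.2f}; out of control: {new_stat > ucl}")
```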

Keywords: control chart, parameter-less self-organizing map, self-organizing map, time-varying property

Procedia PDF Downloads 267
16264 A TgCNN-Based Surrogate Model for Subsurface Oil-Water Phase Flow under Multi-Well Conditions

Authors: Jian Li

Abstract:

The uncertainty quantification and inversion problems of subsurface oil-water phase flow usually require extensive repeated forward calculations for new runs with changed conditions. To reduce the computational time, various forms of surrogate models have been built. Related research shows that deep learning has emerged as an effective surrogate modeling approach, but most deep learning surrogate models are purely data-driven, which often leads to poor robustness and abnormal results. To make the model more consistent with physical laws, a coupled theory-guided convolutional neural network (TgCNN)-based surrogate model is built to improve computational efficiency while maintaining satisfactory accuracy. The model is a convolutional neural network based on multi-well reservoir simulation. The core notion of the proposed method is to bridge two separate blocks on top of an overall network; they underlie the TgCNN model in a coupled form, which reflects the coupled nature of pressure and water saturation in the two-phase flow equations. The model is driven not only by labeled data but also by scientific theory, including the governing equations, stochastic parameterization, boundary and initial conditions, well conditions, and expert knowledge. The results show that the TgCNN-based surrogate model exhibits satisfactory accuracy and efficiency for subsurface oil-water phase flow under multi-well conditions.
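Conceptually, the theory-guided part of the training objective can be sketched as a data-misfit term plus a weighted physics-residual term, as below; the residual function is a generic placeholder, not the actual two-phase flow equations used in the paper.

```python
# Generic "data loss + physics loss" objective for a theory-guided network.
import torch

def theory_guided_loss(pred_pressure, pred_saturation, data_p, data_s,
                       pde_residual_fn, lambda_pde=1.0):
    """Total loss = data misfit + weighted physics (PDE residual) misfit."""
    data_loss = torch.mean((pred_pressure - data_p) ** 2) + \
                torch.mean((pred_saturation - data_s) ** 2)
    residual = pde_residual_fn(pred_pressure, pred_saturation)
    physics_loss = torch.mean(residual ** 2)   # residual ~ 0 when physics hold
    return data_loss + lambda_pde * physics_loss

# Toy usage with a placeholder residual coupling the two output fields.
p = torch.rand(8, 1, 32, 32, requires_grad=True)
s = torch.rand(8, 1, 32, 32, requires_grad=True)
loss = theory_guided_loss(p, s, torch.rand_like(p), torch.rand_like(s),
                          pde_residual_fn=lambda p_, s_: p_ * 0.0 + (s_ - s_.mean()))
loss.backward()
print(float(loss))
```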

Keywords: coupled theory-guided convolutional neural network, multi-well conditions, surrogate model, subsurface oil-water phase

Procedia PDF Downloads 81
16263 Nonlinear Model Predictive Control for Biodiesel Production via Transesterification

Authors: Juliette Harper, Yu Yang

Abstract:

Biofuels have gained significant attention recently due to the new regulations and agreements regarding fossil fuels and greenhouse gases being made by countries around the globe. One of the most common biofuels is biodiesel, produced primarily via the transesterification reaction. We model this nonlinear process in MATLAB using the standard kinetic equations. A nonlinear model predictive controller (NMPC) is then developed to regulate the process, owing to its capability to handle process constraints. Feed flow uncertainty and kinetic disturbances are further incorporated into the model to capture real-world operating conditions. The simulation results show that the proposed NMPC can keep the final composition of fatty acid methyl esters (FAME) above the target threshold with high probability by adjusting the process temperature and flow rate. This research will allow further understanding of NMPC under uncertainty and of how to design the computational strategy for larger processes with more variables.
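A highly simplified receding-horizon sketch of the idea, with a one-state lumped conversion model and temperature as the only manipulated variable; the rate constants, bounds, FAME target and disturbance level are illustrative assumptions, not the MATLAB model used in the study.

```python
# Minimal receding-horizon (NMPC-style) loop on a toy conversion model.
import numpy as np
from scipy.optimize import minimize

dt, horizon = 1.0, 5                  # minutes, prediction horizon (steps)
x_target = 0.96                       # desired FAME fraction
T_min, T_max = 45.0, 65.0             # allowed reactor temperature, deg C

def step(x, T):
    k = 0.05 * np.exp(0.05 * (T - 50.0))   # toy Arrhenius-like rate constant
    return x + dt * k * (1.0 - x)          # conversion toward completion

def predict(x0, T_seq):
    x, traj = x0, []
    for T in T_seq:
        x = step(x, T)
        traj.append(x)
    return np.array(traj)

def nmpc_control(x0, T_prev):
    cost = lambda T_seq: np.sum((predict(x0, T_seq) - x_target) ** 2) \
                         + 1e-3 * np.sum(np.diff(np.r_[T_prev, T_seq]) ** 2)
    res = minimize(cost, np.full(horizon, T_prev), method="SLSQP",
                   bounds=[(T_min, T_max)] * horizon)
    return res.x[0]                        # apply only the first move

x, T = 0.0, 50.0
for t in range(40):
    T = nmpc_control(x, T)
    # plant step with a small random disturbance to mimic feed uncertainty
    x = step(x, T) + np.random.normal(0.0, 0.002)
print(f"final FAME fraction: {x:.3f} at T = {T:.1f} C")
```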

Keywords: NMPC, biodiesel, uncertainties, nonlinear, MATLAB

Procedia PDF Downloads 93
16262 A Deleuzean Feminist Analysis of the Everyday, Gendered Performances of Teen Femininity: A Case Study on Snaps and Selfies in East London

Authors: Christine Redmond

Abstract:

This paper contributes to research on gendered, digital identities by exploring how selfies offer scope for disrupting and moving through gendered and racial ideals of feminine beauty. The selfie involves self-presentation, filters, captions, hashtags, online publishing, likes and more, making the relationship between subjectivity, practice and the social use of selfies a complex process. Employing qualitative research methods on youth selfies in the UK, the author investigates the interdisciplinary entanglement between studies of social media and fields within gender, media and cultural studies, providing a material-discursive treatment of the selfie as an embodied practice. Drawing on data collected from focus groups with teenage girls in East London, the study explores how girls experience and relate to selfies and snaps in their everyday lives. The author's Deleuzean feminist approach suggests that bodies and selfies are not individual, disembodied entities between which there is a mediating interaction. Instead, bodies and selfies are positioned as entangled to the point where it becomes unclear where a selfie ends and a body begins. Recognising selfies not just as images but as material and social assemblages opens up possibilities for unpacking the selfie in ways that move beyond the representational model used in some studies of socially mediated digital images. The study reveals how the selfie functions to enable moments of empowerment within the limiting, dominant ideologies of Euro-centrism, patriarchy and heteronormativity.

Keywords: affect theory, femininity, gender, heteronormativity, photography, selfie, snapchat

Procedia PDF Downloads 242
16261 Establishing Sequence Stratigraphic Framework and Hydrocarbon Potential of the Late Cretaceous Strata: A Case Study from Central Indus Basin, Pakistan

Authors: Bilal Wadood, Suleman Khan, Sajjad Ahmed

Abstract:

The Late Cretaceous strata (Mughal Kot Formation) exposed in the Central Indus Basin, Pakistan are evaluated to establish a sequence stratigraphic framework and the potential for hydrocarbon accumulation. Petrographic studies and SEM analysis were carried out to infer the hydrocarbon potential of the rock unit. The petrographic details disclosed four microfacies: Pelagic Mudstone, Orbitoidal Wackestone, Quartz Arenite, and Quartz Wacke. The lowermost part of the rock unit consists of the Orbitoidal Wackestone, which indicates deposition in a middle shelf environment. The Quartz Arenite and Quartz Wacke suggest deposition in deep slope settings, while the Pelagic Mudstone microfacies points toward deposition in distal deep marine settings. Based on the facies stacking patterns and cyclicity in a chronostratigraphic context, the strata are divided into two 3rd-order cycles. One complete sequence, i.e. a Transgressive systems tract (TST), Highstand systems tract (HST) and Lowstand systems tract (LST), is followed by another Transgressive systems tract and Highstand systems tract, with no markers of a sequence boundary. The LST sands are sandwiched between TST and HST shales, but no promising porosity/permeability values have been determined. Microfacies and SEM studies revealed little chance of hydrocarbon accumulation, and the overall reservoir potential is characterized as low.

Keywords: cycle, deposition, microfacies, reservoir

Procedia PDF Downloads 147
16260 A Comprehensive Review of Artificial Intelligence Applications in Sustainable Building

Authors: Yazan Al-Kofahi, Jamal Alqawasmi

Abstract:

In this study, a systematic literature review (SLR) was conducted with the main goal of assessing the existing literature on how artificial intelligence (AI), machine learning (ML) and deep learning (DL) models are used in sustainable architecture applications, covering issues including thermal comfort satisfaction, energy efficiency, cost prediction and many others. The search strategy used several databases, including Scopus, Springer and Google Scholar. The inclusion criteria were based on two search strings related to DL, ML and sustainable architecture. The timeframe for inclusion was open, although most of the included papers were published in the previous four years. As a filtering strategy, conference papers and books were excluded from the database search results. Using these inclusion and exclusion criteria, the search was conducted and a sample of 59 papers was selected for the final analysis. In the data extraction phase, the needed data were extracted from these papers and then analyzed and correlated. The results of this SLR showed that there are many applications of ML and DL in sustainable buildings and that this topic is currently trending. Most of the papers focused on addressing environmental sustainability issues and factors using machine learning predictive models, with a particular emphasis on the use of decision tree algorithms. Moreover, it was found that the random forest regressor demonstrates strong performance across all feature selection groups for cost prediction of buildings as a machine learning predictive model.

Keywords: machine learning, deep learning, artificial intelligence, sustainable building

Procedia PDF Downloads 62
16259 Sinhala Sign Language to Grammatically Correct Sentences using NLP

Authors: Anjalika Fernando, Banuka Athuraliya

Abstract:

This paper presents a comprehensive approach for converting Sinhala Sign Language (SSL) into grammatically correct sentences using natural language processing (NLP) techniques in real time. While previous studies have explored various aspects of SSL translation, the research gap lies in the absence of grammar checking for SSL. This work aims to bridge this gap by proposing a two-stage methodology that leverages deep learning models to detect signs and translate them into coherent sentences, ensuring grammatical accuracy. The first stage of the approach utilizes a long short-term memory (LSTM) deep learning model to recognize and interpret SSL signs. By training the LSTM model on a dataset of SSL gestures, it learns to accurately classify and translate these signs into textual representations. The LSTM model achieves a commendable accuracy rate of 94%, demonstrating its effectiveness in recognizing and translating SSL gestures. Building upon the successful recognition and translation of SSL signs, the second stage of the methodology focuses on improving the grammatical correctness of the translated sentences. The project employs a neural machine translation (NMT) architecture, consisting of an encoder and decoder with LSTM components, to enhance the syntactical structure of the generated sentences. By training the NMT model on a parallel corpus of grammatically incorrect Sinhala sentences and their corresponding grammatically correct translations, it learns to generate coherent and grammatically accurate sentences. The NMT model achieves an impressive accuracy rate of 98%, affirming its capability to produce linguistically sound translations. The proposed approach offers significant contributions to the field of SSL translation and grammar correction. By addressing the critical issue of grammar checking, it enhances the usability and reliability of SSL translation systems, facilitating effective communication between hearing-impaired and non-sign-language users. Furthermore, the integration of deep learning techniques, such as LSTM and NMT, ensures the accuracy and robustness of the translation process. This research holds great potential for practical applications, including educational platforms, accessibility tools, and communication aids for the hearing-impaired. It also lays the foundation for future advancements in SSL translation systems, fostering inclusive and equal opportunities for the deaf community. Future work includes expanding the existing datasets to further improve the accuracy and generalization of the SSL translation system. Additionally, the development of a dedicated mobile application would enhance the accessibility and convenience of SSL translation on handheld devices. Furthermore, efforts will be made to enhance the current application for educational purposes, enabling individuals to learn and practice SSL more effectively. Another area of future exploration involves enabling two-way communication, allowing seamless interaction between sign-language users and non-sign-language users. In conclusion, this paper presents a novel approach for converting Sinhala Sign Language gestures into grammatically correct sentences using NLP techniques in real time. The two-stage methodology, comprising an LSTM model for sign detection and translation and an NMT model for grammar correction, achieves high accuracy rates of 94% and 98%, respectively. By addressing the lack of grammar checking in existing SSL translation research, this work contributes significantly to the development of more accurate and reliable SSL translation systems, thereby fostering effective communication and inclusivity for the hearing-impaired community.
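A minimal sketch of the first (sign recognition) stage as an LSTM classifier over keypoint sequences; the sequence length, feature dimensionality and number of sign classes are assumptions, and the NMT grammar-correction stage is not shown.

```python
# LSTM sequence classifier for sign gestures (keypoint sequences assumed).
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES, N_SIGNS = 30, 126, 50   # 30 frames, 126 keypoint values, 50 signs

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(N_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data shaped like extracted keypoint sequences.
X = np.random.rand(200, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, N_SIGNS, size=200)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:1]).argmax())   # index of the most likely sign
```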

Keywords: Sinhala sign language, sign Language, NLP, LSTM, NMT

Procedia PDF Downloads 97
16258 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. In the first dataset, protein regression is the problem to solve, while variety classification is the problem to solve in the second dataset. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
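Two of the chemometric preprocessing steps listed above, SNV and Savitzky-Golay filtering, can be sketched as below for a matrix of spectra (rows are pixels or kernels, columns are wavelength bands); the random matrix only stands in for the 900-1700 nm hyperspectral data.

```python
# SNV and Savitzky-Golay preprocessing of NIR spectra (illustrative data).
import numpy as np
from scipy.signal import savgol_filter

spectra = np.random.rand(100, 224)   # placeholder: 100 spectra x 224 bands

def snv(X):
    """Standard normal variate: center and scale each spectrum individually."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def sg(X, window=11, polyorder=2, deriv=0):
    """Savitzky-Golay smoothing (or derivative) along the wavelength axis."""
    return savgol_filter(X, window_length=window, polyorder=polyorder,
                         deriv=deriv, axis=1)

preprocessed = sg(snv(spectra))      # e.g. SNV followed by SG smoothing
print(preprocessed.shape)            # (100, 224), ready for PLS-R, ANN or CNN input
```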

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 96
16257 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration

Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith

Abstract:

Multimodal image registration is a profoundly complex task, which is why deep learning has been widely used to address it in recent years. However, two main challenges remain: firstly, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB, G_BA and two discrimination networks D_A, D_B connected by spatial transformation layers. G_AB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator D_B, which learns to judge the quality of alignment of the registered image B. G_BA and D_A learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, resulting in images Â, B̂, which were registered to B̃, Ã, which in turn were registered to the initial image pair A, B. Thus the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning- and non-learning-based registration algorithms. Our approach leads to Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms like SimpleElastix and VoxelMorph.
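A sketch of the cycle-consistency term, with the registration networks abstracted as callables that warp a moving image toward a fixed one; architectures, discriminators and the spatial transformation layers are omitted, and the toy identity networks only show the call pattern.

```python
# Cycle-consistency loss over two abstract registration networks (PyTorch).
import torch
import torch.nn.functional as F

def cycle_consistency_loss(img_a, img_b, g_ab, g_ba):
    """L1 cycle loss: warp B toward A and back, and A toward B and back."""
    b_tilde = g_ab(img_b, img_a)   # B warped to align with A
    a_tilde = g_ba(img_a, img_b)   # A warped to align with B
    b_hat = g_ba(b_tilde, img_b)   # warp it back toward the original B
    a_hat = g_ab(a_tilde, img_a)   # warp it back toward the original A
    return F.l1_loss(a_hat, img_a) + F.l1_loss(b_hat, img_b)

# Toy usage with identity "networks" just to show the call pattern.
a = torch.rand(1, 1, 64, 64)   # e.g. an MRI slice
b = torch.rand(1, 1, 64, 64)   # e.g. the corresponding CT slice
identity = lambda moving, fixed: moving
print(cycle_consistency_loss(a, b, identity, identity))  # tensor(0.)
```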

Keywords: cycle consistency, deformable multimodal image registration, deep learning, GAN

Procedia PDF Downloads 129
16256 Sustainable Milling Process for Tensile Specimens

Authors: Shilpa Kumari, Ramakumar Jayachandran

Abstract:

Machining of aluminium extrusion profiles in the automotive industry has gained much interest in the last decade, particularly due to the higher utilization of aluminium profiles and the weight reduction benefits they bring. Milling is the most common material removal process, in which a rotary milling cutter is moved against a workpiece. The physical contact of the milling cutter with the workpiece increases the friction between them, thereby affecting the longevity of the milling tool and the surface finish of the workpiece. To minimise this issue, the milling process uses cutting fluids or emulsions; however, the use of emulsion has a negative impact on the environment (consumption of water and oils, and the used emulsion needs to be treated before disposal) as well as on personnel working close to the process (prolonged use may cause respiratory problems and exposure to microbial toxins generated by bacteria in the emulsions). Furthermore, the workpiece needs to be cleaned after the milling process, which does not add value to the process, and the cleaning also disperses a mist of emulsion into the working environment. Hydro Extrusion is committed to improving the sustainability performance of its operations, and given the negative impact of using emulsion in the milling process, a new innovative process, dry milling, was developed to minimise the impact of the cutting fluid. In this paper, the authors present one application of dry milling: the machining of tensile specimens in the laboratory. Dry milling is an innovative milling process without any cooling or lubrication and has several advantages. Several million tensile tests are carried out in extrusion laboratories worldwide using the wet milling process, and the machining of tensile specimens has a significant impact on the reliability of test results. The paper presents the results for different 6xxx alloys and different specimen wall thicknesses, machined by both dry and wet milling processes. For both the different 6xxx alloys and the different wall thicknesses, mechanical properties were similar for samples milled using dry and wet milling. Several tensile specimens were prepared using both dry and wet milling to compare the results, and the outcome showed that the dry milling process does not affect the reliability of tensile test results.

Keywords: dry milling, tensile testing, wet milling, 6xxx alloy

Procedia PDF Downloads 196