Search results for: artificial intelligence in semiconductor manufacturing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4912

2692 Lean Production to Increase Reproducibility and Work Safety in the Laser Beam Melting Process Chain

Authors: C. Bay, A. Mahr, H. Groneberg, F. Döpper

Abstract:

Additive Manufacturing processes are becoming increasingly established in the industry for the economic production of complex prototypes and functional components. Laser beam melting (LBM), the most frequently used Additive Manufacturing technology for metal parts, has been gaining in industrial importance for several years. The LBM process chain – from material storage to machine set-up and component post-processing – requires many manual operations. These steps often depend on the manufactured component and are therefore not standardized. These operations are often not performed in a standardized manner, but depend on the experience of the machine operator, e.g., levelling of the build plate and adjusting the first powder layer in the LBM machine. This lack of standardization limits the reproducibility of the component quality. When processing metal powders with inhalable and alveolar particle fractions, the machine operator is at high risk due to the high reactivity and the toxic (e.g., carcinogenic) effect of the various metal powders. Faulty execution of the operation or unintentional omission of safety-relevant steps can impair the health of the machine operator. In this paper, all the steps of the LBM process chain are first analysed in terms of their influence on the two aforementioned challenges: reproducibility and work safety. Standardization to avoid errors increases the reproducibility of component quality as well as the adherence to and correct execution of safety-relevant operations. The corresponding lean method 5S will therefore be applied, in order to develop approaches in the form of recommended actions that standardize the work processes. These approaches will then be evaluated in terms of ease of implementation and their potential for improving reproducibility and work safety. The analysis and evaluation showed that sorting tools and spare parts as well as standardizing the workflow are likely to increase reproducibility. Organizing the operational steps and production environment decreases the hazards of material handling and consequently improves work safety.

Keywords: additive manufacturing, lean production, reproducibility, work safety

Procedia PDF Downloads 184
2691 Numerical Model for Investigation of Recombination Mechanisms in Graphene-Bonded Perovskite Solar Cells

Authors: Amir Sharifi Miavaghi

Abstract:

Recombination mechanisms in graphene-bonded perovskite solar cells are investigated with a numerical model in which doped-graphene structures are employed as the anode/cathode bonding semiconductor. The dark and illuminated current density-voltage curves are investigated by regression analysis. Loss mechanisms such as the back contact barrier and deep surface defects in the absorber layer are determined by fitting the simulated cell performance to the measurements using the differential evolution global optimization algorithm. The performance of the cell in the bonding process includes J-V curves examined at different temperatures and the open-circuit voltage (Voc) under different light intensities as a function of temperature. Based on the proposed numerical model and the extracted loss mechanisms, our approach can be used to further improve the efficiency of the solar cell. Due to the high demand for alternative energy sources, solar cells are good alternatives for energy generation using the photovoltaic phenomenon.
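
As a hedged illustration of the fitting step described above (not the authors' actual cell model), the sketch below uses SciPy's differential evolution to fit an explicit single-diode J-V model to current density-voltage data; the diode parameters, bounds, and synthetic "measurements" are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative single-diode model (explicit form, series resistance neglected):
#   J(V) = J_ph - J_0 * (exp(q V / (n k T)) - 1) - V / R_sh
q, kB, T = 1.602e-19, 1.381e-23, 300.0

def model(V, params):
    J_ph, J_0, n, R_sh = params
    return J_ph - J_0 * (np.exp(q * V / (n * kB * T)) - 1.0) - V / R_sh

# Hypothetical measured J-V points (replace with dark/illuminated measurements)
rng = np.random.default_rng(0)
V_meas = np.linspace(0.0, 1.1, 50)
J_meas = model(V_meas, (220.0, 1e-9, 1.8, 5e3)) + rng.normal(0, 0.5, V_meas.size)

def loss(params):
    # Sum of squared residuals between simulated and measured curves
    return np.sum((model(V_meas, params) - J_meas) ** 2)

bounds = [(100.0, 300.0),   # J_ph  (A/m^2)
          (1e-12, 1e-6),    # J_0   (A/m^2)
          (1.0, 2.5),       # ideality factor n
          (1e2, 1e5)]       # R_sh  (ohm m^2)
result = differential_evolution(loss, bounds, seed=0, tol=1e-8)
print("fitted parameters:", result.x)
```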

Keywords: numerical model, recombination mechanism, graphene, perovskite solar cell

Procedia PDF Downloads 69
2690 Sterols Regulate the Activity of Phospholipid Scramblase by Interacting through Putative Cholesterol Binding Motif

Authors: Muhasin Koyiloth, Sathyanarayana N. Gummadi

Abstract:

Biological membranes are ordered associations of lipids, proteins, and carbohydrates. Lipids, except sterols, possess an asymmetric distribution across the bilayer. Eukaryotic membranes possess a group of lipid translocators called scramblases that disrupt phospholipid asymmetry. Their action is implicated in cell activation during wound healing and phagocytic clearance of apoptotic cells. Cholesterol is one of the major membrane lipids, distributed evenly between both leaflets, and can directly influence membrane fluidity through its ordering effect. This fluidity has an impact on the activity of several membrane proteins. Palmitoylated phospholipid scramblases localize to lipid rafts, which are characterized by a higher sterol content. Here we propose that cholesterol can interact with scramblases through a putative CRAC motif and can modulate their activity. To prove this, we reconstituted phospholipid scramblase 1 of C. elegans (SCRM-1) in proteoliposomes containing different amounts of cholesterol (liquid ordered/Lo). We noted that the presence of cholesterol reduced the scramblase activity of wild-type SCRM-1. The interaction between SCRM-1 and cholesterol was confirmed by fluorescence spectroscopy using NBD-Chol. Also, we observed loss of this interaction when I273 in the CRAC motif was mutated to Asp. Interestingly, the point mutant partially retained scramblase activity in Lo vesicles. The current study elucidates the important interaction between cholesterol and SCRM-1 that fine-tunes its activity in artificial membranes.

Keywords: artificial membranes, CRAC motif, plasma membrane, PL scramblase

Procedia PDF Downloads 175
2689 Short-Path Near-Infrared Laser Detection of Environmental Gases by Wavelength-Modulation Spectroscopy

Authors: Isao Tomita

Abstract:

The detection of the environmental gases 12CO_2, 13CO_2, and CH_4 using near-infrared semiconductor lasers with a short laser path length is studied by means of wavelength-modulation spectroscopy. The developed system is compact and sensitive enough to detect the absorption peaks of isotopic 13CO_2 of a 3% CO_2 gas at 2 µm with a path length of 2.4 m, where its peak size is two orders of magnitude smaller than that of the ordinary 12CO_2 peaks. In addition, the detection of the 12CO_2 peaks of a 385-ppm (0.0385%) CO_2 gas in the air is made at 2 µm with a path length of 1.4 m. Furthermore, in pursuing the detection of ancient environmental CH_4 gas confined to bubbles in polar ice, measurements of the absorption spectrum of a trace amount of CH_4 in a small area are attempted. For a 100% CH_4 gas trapped in a 1 mm^3 glass container, the absorption peaks of CH_4 are obtained at 1.65 µm with a path length of 3 mm, and the gas pressure is also extrapolated from the measured data.
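
The sketch below is a minimal illustration of the wavelength-modulation principle rather than the authors' measurement code: it simulates modulation of the laser wavelength across a single Lorentzian absorption line and recovers the second-harmonic (2f) signal by lock-in-style demodulation. The line parameters, modulation depth, and sampling settings are assumed values.

```python
import numpy as np

# Assumed Lorentzian absorption line (centre nu0, half-width gamma, peak absorbance A0)
nu0, gamma, A0 = 0.0, 1.0, 0.01

def absorbance(nu):
    return A0 * gamma**2 / ((nu - nu0)**2 + gamma**2)

f_mod = 10e3          # modulation frequency (Hz), assumed
m = 1.1 * gamma       # modulation depth, assumed
fs = 2e6              # sampling rate (Hz)
t = np.arange(0, 5e-3, 1 / fs)

nu_scan = np.linspace(-6 * gamma, 6 * gamma, 200)   # slow wavelength ramp
signal_2f = []
for nu_c in nu_scan:
    nu_t = nu_c + m * np.cos(2 * np.pi * f_mod * t)  # modulated laser wavelength
    transmitted = np.exp(-absorbance(nu_t))          # Beer-Lambert transmission
    # Lock-in demodulation at 2f: multiply by the reference and low-pass (average)
    ref_2f = np.cos(2 * np.pi * 2 * f_mod * t)
    signal_2f.append(2 * np.mean(transmitted * ref_2f))

signal_2f = np.array(signal_2f)
print("peak |2f| signal:", np.max(np.abs(signal_2f)))
```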

Keywords: environmental gases, near-infrared laser detection, wavelength-modulation spectroscopy, gas pressure

Procedia PDF Downloads 423
2688 Improved Technology Portfolio Management via Sustainability Analysis

Authors: Ali Al-Shehri, Abdulaziz Al-Qasim, Abdulkarim Sofi, Ali Yousef

Abstract:

The oil and gas industry has played a major role in improving the prosperity of mankind and driving the world economy. According to estimates by the International Energy Agency (IEA) and the Energy Information Administration (EIA), the world will continue to rely heavily on hydrocarbons for decades to come. This growing energy demand mandates taking sustainability measures to prolong the availability of reliable and affordable energy sources and to lower their environmental impact. Unlike any other industry, oil and gas upstream operations are energy-intensive and scattered over large zonal areas. These challenging conditions require unique sustainability solutions. In recent years, there has been a concerted effort by the oil and gas industry to develop and deploy innovative technologies to maximize efficiency, reduce carbon footprint and CO2 emissions, and optimize resource and material consumption. In the past, research and development (R&D) in the exploration and production sector was primarily driven by maximizing profit through higher hydrocarbon recovery and new discoveries. Environmentally friendly and sustainable technologies are increasingly being deployed to balance sustainability and profitability. Analyzing technologies and their sustainability impact is increasingly being used in corporate decision-making for improved portfolio management and for allocating valuable resources toward technology R&D. This paper articulates and discusses a novel workflow to identify strategic sustainable technologies for improved portfolio management by addressing existing and future upstream challenges. It uses a systematic approach that relies on sustainability key performance indicators (KPIs) including the energy efficiency quotient, carbon footprint, and CO2 emissions. The paper provides examples of various technologies including CCS, reducing water cuts, automation, using renewables, energy efficiency, etc. The use of 4IR technologies such as artificial intelligence, machine learning, and data analytics is also discussed. Overlapping technologies, areas of collaboration, and synergistic relationships are identified. The unique sustainability analyses provide improved decision-making on technology portfolio management.
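
As a hedged sketch of the KPI-based screening step, the snippet below ranks a set of candidate technologies by a weighted sustainability score built from the KPIs named above; the technology names, KPI values, and weights are invented placeholders, not data or weights from the paper.

```python
import pandas as pd

# Hypothetical KPI table: a higher efficiency quotient is better,
# a lower carbon footprint and lower CO2 emissions are better (all normalized 0-1).
tech = pd.DataFrame({
    "technology": ["CCS", "water-cut reduction", "automation", "renewables integration"],
    "efficiency_quotient": [0.55, 0.70, 0.80, 0.65],
    "carbon_footprint":    [0.30, 0.45, 0.40, 0.20],
    "co2_emissions":       [0.25, 0.50, 0.45, 0.15],
})

weights = {"efficiency_quotient": 0.4, "carbon_footprint": 0.3, "co2_emissions": 0.3}

# Benefit KPI counts positively; cost KPIs count negatively.
tech["score"] = (weights["efficiency_quotient"] * tech["efficiency_quotient"]
                 - weights["carbon_footprint"] * tech["carbon_footprint"]
                 - weights["co2_emissions"] * tech["co2_emissions"])

print(tech.sort_values("score", ascending=False)[["technology", "score"]])
```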

Keywords: sustainability, oil and gas, technology portfolio, key performance indicator

Procedia PDF Downloads 183
2687 Preparation and Properties of PP/EPDM Reinforced with Graphene

Authors: M. Haghnegahdar, G. Naderi, M. H. R. Ghoreishy

Abstract:

Polypropylene (PP)/ethylene propylene diene monomer (EPDM) samples (80/20) containing 0, 0.5, 1, 1.5, 2, 2.5, and 3 (expressed in mass fraction) graphene were prepared by melt compounding to investigate the microstructure, mechanical properties, and thermal stability as well as the electrical resistance of the samples. X-ray diffraction data confirmed that the graphene platelets are well dispersed in PP/EPDM. Mechanical properties such as tensile strength, impact strength, and hardness showed an increasing trend with graphene loading, which exemplifies the substantial reinforcing nature of this kind of nanofiller and its good interaction with the polymer chains. At the same time, it is found that the thermo-oxidative degradation of the PP/EPDM nanocomposites is noticeably retarded with increasing graphene content. The electrical surface resistivity of the nanocomposite changed dramatically once the electrical percolation threshold formed, shifting the electrical behavior from insulator to semiconductor. Furthermore, these results were confirmed by scanning electron microscopy (SEM), dynamic mechanical thermal analysis (DMTA), and transmission electron microscopy (TEM).

Keywords: nanocomposite, graphene, microstructure, mechanical properties

Procedia PDF Downloads 330
2686 Data Analytics in Energy Management

Authors: Sanjivrao Katakam, Thanumoorthi I., Antony Gerald, Ratan Kulkarni, Shaju Nair

Abstract:

With increasing energy costs and their impact on the business, sustainability today has evolved from a social expectation to an economic imperative. Therefore, finding methods to reduce cost has become a critical directive for industry leaders. Effective energy management is the only way to cut costs. However, energy management has been a challenge because it requires a change in old habits and legacy systems followed for decades. Today, exorbitant volumes of energy and operational data are being captured and stored by industries, but they are unable to convert these structured and unstructured data sets into meaningful business intelligence. It must be noted that for quick decisions, organizations must learn to cope with large volumes of operational data in different formats. Energy analytics not only helps in extracting inferences from these data sets, but is also instrumental in the transformation from old approaches to energy management to new ones. This in turn assists in effective decision making for implementation. Organizations need an established corporate strategy for reducing operational costs through visibility and optimization of energy usage. Energy analytics plays a key role in the optimization of operations. The paper describes how energy data analytics is extensively used today in different scenarios such as reducing operational costs, predicting energy demand, optimizing network efficiency, asset maintenance, improving customer insights, and device data insights. The paper also highlights how analytics helps transform insights obtained from energy data into sustainable solutions. The paper utilizes data from an array of segments such as the retail, transportation, and water sectors.

Keywords: energy analytics, energy management, operational data, business intelligence, optimization

Procedia PDF Downloads 364
2685 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass estimation, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure such as canopy height. However, LiDAR’s coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, have the ability to cover large forest areas with a high repeat rate, but they do not carry height information. Hence, exploring ways of integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, with a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained RFR and CNN models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show that for 2018 the mean absolute error (MAE) is 2.93 m for the RFR model and 1.71 m for the CNN model, while for 2021 the MAE is 3.35 m for the RFR model and 3.78 m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
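
As a hedged illustration of the Random Forest Regression branch of this workflow (not the authors' code), the sketch below trains an RFR model to map per-pixel Sentinel-2 band values to LiDAR-derived canopy height; the number of bands, array shapes, and synthetic data are assumptions standing in for the real rasters.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical inputs: Sentinel-2 reflectance stacked as (n_pixels, n_bands) and a
# co-registered 10 m LiDAR canopy height model flattened to (n_pixels,).
rng = np.random.default_rng(0)
X = rng.random((5000, 10))                                  # stand-in for 10 Sentinel-2 bands
y = 30 * X[:, 3] - 10 * X[:, 7] + rng.normal(0, 1, 5000)    # stand-in for canopy height (m)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

rfr = RandomForestRegressor(n_estimators=300, min_samples_leaf=2, n_jobs=-1, random_state=0)
rfr.fit(X_train, y_train)

pred = rfr.predict(X_test)
print(f"MAE on held-out pixels: {mean_absolute_error(y_test, pred):.2f} m")
```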

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 92
2684 Determining Inventory Replenishment Policy for Major Component in Assembly-to-Order of Cooling System Manufacturing

Authors: Tippawan Nasawan

Abstract:

The objective of this study is to find a replenishment policy for Assembly-to-Order (ATO) manufacturing in which some of the major components have lead times longer than the customer lead time. The variety of products, independent component demand, and long component lead times are the difficulties that have resulted in an overstock problem. In addition, the ordering cost is trivial compared to the material cost of the major component. A conceptual design of a Decision Supporting System (DSS) is introduced to assist the replenishment policy. One of the keys is component replenishment based on the Available-to-Promise (ATP) variable. The Poisson distribution is adopted to model demand patterns in order to calculate the Safety Stock (SS) at the specified Customer Service Level (CSL); when the distribution cannot be identified, a nonparametric approach is applied instead. After comparing the ending inventory between the new policy and the old policy, the overstock is significantly reduced by 46.9 percent, or about 469,891.51 US dollars, in the material cost of the major component. Besides, the major component inventory count is also reduced by about 41 percent, which helps mitigate the risk of damage while keeping stock.
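
The following sketch (an illustration under assumed numbers, not the study's DSS) computes a reorder point and safety stock for a component with Poisson demand at a specified customer service level, which is the calculation the abstract describes.

```python
from scipy.stats import poisson

# Assumed inputs: mean component demand per day and replenishment lead time (days)
mean_daily_demand = 4.0
lead_time_days = 30
csl = 0.95                      # target customer service level

lead_time_mean = mean_daily_demand * lead_time_days

# Smallest stock level S such that P(demand over the lead time <= S) >= CSL
reorder_point = int(poisson.ppf(csl, lead_time_mean))
safety_stock = reorder_point - int(round(lead_time_mean))

print("lead-time demand mean:", lead_time_mean)
print("reorder point:", reorder_point)
print("safety stock:", safety_stock)
```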

Keywords: assembly-to-order, decision supporting system, component replenishment, Poisson distribution

Procedia PDF Downloads 127
2683 Corporate Performance and Balance Sheet Indicators: Evidence from Indian Manufacturing Companies

Authors: Hussain Bohra, Pradyuman Sharma

Abstract:

This study highlights the significance of balance sheet indicators for corporate performance in the case of Indian manufacturing companies. Balance sheet indicators show the actual financial health of a company; they help external investors choose the right company for their investment and help external financing agencies extend finance to manufacturing companies more easily. The period of study is 2000 to 2014 for 813 manufacturing companies for which continuous data are available throughout the study period. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods, namely fixed effect and random effect methods, are used for the analysis. The Likelihood Ratio test, Lagrange Multiplier test, and Hausman test results establish the suitability of the fixed effect model for the estimation. Return on assets (ROA) is used as the proxy to measure corporate performance; it is the most common proxy in the corporate performance literature and reflects the return on firms' long-term investment projects. Different ratios such as the current ratio, debt-equity ratio, receivable turnover ratio, and solvency ratio have been used as proxies for the balance sheet indicators, with firm size and sales as control variables in the model. From the empirical analysis, it was found that all selected financial ratios have a significant and positive impact on corporate performance. Firm sales and firm size are also found to have a significant and positive impact on corporate performance. To check the robustness of the results, the sample was split along each ratio: firms with high versus low debt-equity ratios, high versus low current ratios, high versus low receivable turnover, and high versus low solvency ratios. We find that the results are robust across all of these sub-samples, and the results for the other variables are in line with those for the whole sample. These findings confirm that balance sheet indicators play a significant role in corporate performance in India. The findings imply that corporate managers should monitor these ratios to maintain the minimum expected level of performance, and should also maintain adequate sales and total assets to improve corporate performance.
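
As a hedged sketch of the fixed-effect estimation described above (not the authors' code), the snippet below regresses ROA on a few balance sheet ratios with firm fixed effects implemented as entity dummies in statsmodels; the variable names and the synthetic panel are assumptions standing in for the PROWESS data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel; in the study this would come from the PROWESS database.
rng = np.random.default_rng(0)
n_firms, n_years = 50, 15
panel = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(2000, 2000 + n_years), n_firms),
    "current_ratio": rng.normal(1.5, 0.4, n_firms * n_years),
    "debt_equity": rng.normal(1.0, 0.3, n_firms * n_years),
    "receivable_turnover": rng.normal(6.0, 1.5, n_firms * n_years),
    "log_sales": rng.normal(8.0, 1.0, n_firms * n_years),
})
panel["roa"] = (0.02 * panel["current_ratio"] - 0.01 * panel["debt_equity"]
                + 0.005 * panel["receivable_turnover"] + 0.01 * panel["log_sales"]
                + rng.normal(0, 0.02, len(panel)))

# Firm fixed effects via entity dummies (C(firm)); year effects could be added the same way.
model = smf.ols("roa ~ current_ratio + debt_equity + receivable_turnover + log_sales + C(firm)",
                data=panel).fit()
print(model.params[["current_ratio", "debt_equity", "receivable_turnover", "log_sales"]])
```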

Keywords: balance sheet, corporate performance, current ratio, panel data method

Procedia PDF Downloads 264
2682 Artificial Intelligence in Ethiopian Higher Education: The Impact of Digital Readiness Support, Acceptance, Risk, and Trust on Adoption

Authors: Merih Welay Welesilassie

Abstract:

Understanding educators' readiness to incorporate AI tools into their teaching methods requires comprehensively examining the influencing factors. This understanding is crucial, given the potential of these technologies to personalise learning experiences, improve instructional effectiveness, and foster innovative pedagogical approaches. This study evaluated factors affecting teachers' adoption of AI tools in their English language instruction by extending the Technology Acceptance Model (TAM) to encompass digital readiness support, perceived risk, and trust. A cross-sectional quantitative survey was conducted with 128 English language teachers, supplemented by qualitative data collection from 15 English teachers. The structural model analysis indicated that implementing AI tools in Ethiopian higher education was notably influenced by digital readiness support, perceived ease of use, perceived usefulness, perceived risk, and trust. Digital readiness support positively impacted perceived ease of use, usefulness, and trust while reducing safety and privacy risks. Perceived ease of use positively correlated with perceived usefulness but negatively influenced trust. Furthermore, perceived usefulness strengthened trust in AI tools, while perceived safety and privacy risks significantly undermined trust. Trust was crucial in increasing educators' willingness to adopt AI technologies. The qualitative analysis revealed that the teachers exhibited strong content and pedagogical knowledge but needed more technology-related knowledge. Moreover, it was found that the teachers did not utilise digital tools to teach English. The study identified several obstacles to incorporating digital tools into English lessons, such as insufficient digital infrastructure, a shortage of educational resources, inadequate professional development opportunities, and challenging policies and governance. The findings provide valuable guidance for educators, inform policymakers about creating supportive digital environments, and offer a foundation for further investigation into technology adoption in educational settings in Ethiopia and similar contexts.

Keywords: digital readiness support, AI acceptance, perceived risk, AI trust

Procedia PDF Downloads 18
2681 The Effect of TQM Implementation on Bahrain Industrial Performance

Authors: Bader Al-Mannai, Saad Sulieman, Yaser Al-Alawi

Abstract:

Research studies worldwide have undoubtedly demonstrated that the implementation of a Total Quality Management (TQM) program can improve an organization's competitive abilities and provide strategic quality advances. However, limited empirical research has been directed at measuring the effectiveness of TQM implementation on the performance of industrial and manufacturing organizations. Accordingly, this paper discusses the degree of TQM implementation in Bahrain industries and its effect on their performance. The paper presents the measurement indicators and success factors that were used to assess the degree of TQM implementation in Bahrain industry, and the main performance indicators that were affected by TQM implementation. The research methodology adopted in this study was a survey based on a self-completion questionnaire. The sample population represented the industrial and manufacturing organizations in Bahrain. The study led to the identification of the operational and strategic measurement indicators and success factors that assist organizations in realizing successful TQM implementation and performance improvement. Furthermore, the research analysis confirmed a positive and significant relationship between the examined performance indicators in Bahrain industry and TQM implementation. In conclusion, the investigation of this relationship revealed that the implementation of the TQM program has resulted in remarkable improvements in workforce, sales performance, and quality performance indicators in Bahrain industry.

Keywords: performance indicators, success factors, TQM implementation, Bahrain

Procedia PDF Downloads 552
2680 Methods for Material and Process Monitoring by Characterization of (Second and Third Order) Elastic Properties with Lamb Waves

Authors: R. Meier, M. Pander

Abstract:

In accordance with the Industry 4.0 concept, manufacturing process steps as well as the materials themselves are going to be more and more digitalized within the next years. The “digital twin”, representing the simulated and measured dataset of the (semi-finished) product, can be used to control and optimize the individual processing steps and help to reduce costs and expenditure of time in product development, manufacturing, and recycling. In the present work, two material characterization methods based on Lamb waves were evaluated and compared. For demonstration purposes, both methods were shown on a standard industrial product - copper ribbons, often used in photovoltaic modules as well as in high-current microelectronic devices. By numerical approximation of the Rayleigh-Lamb dispersion model on measured phase velocities, second-order elastic constants (Young’s modulus, Poisson’s ratio) were determined. Furthermore, the effective third-order elastic constants were evaluated by applying elastic, “non-destructive”, mechanical stress on the samples. In this way, small microstructural variations due to mechanical preconditioning could be detected for the first time. Both methods were compared with respect to precision and inline application capabilities. The microstructure of the samples was systematically varied by mechanical loading and annealing. Changes in the elastic ultrasound transport properties were correlated with results from microstructural analysis and mechanical testing. In summary, monitoring the elastic material properties of plate-like structures using Lamb waves is valuable for inline and non-destructive material characterization and manufacturing process control. Second-order elastic constants analysis is robust over wide environmental and sample conditions, whereas the effective third-order elastic constants highly increase the sensitivity with respect to small microstructural changes. Both Lamb wave based characterization methods fit perfectly into the Industry 4.0 concept.
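
As a heavily hedged sketch of the second-order step (fitting elastic constants to measured phase velocities via the symmetric Rayleigh-Lamb equation), the snippet below minimizes the dispersion-equation residual over Young's modulus and Poisson's ratio with SciPy. The density, plate half-thickness, and "measured" S0-mode points are invented placeholders, and the residual normalization is a convenience choice, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import least_squares

rho = 8940.0      # copper density (kg/m^3), assumed
h = 0.1e-3        # half plate thickness (m), assumed
# Hypothetical measured S0-mode points: frequency (Hz) and phase velocity (m/s)
freqs = np.array([0.5e6, 1.0e6, 1.5e6, 2.0e6])
c_meas = np.array([3800.0, 3750.0, 3650.0, 3500.0])

def s0_residual(params):
    E, nu = params
    cL = np.sqrt(E * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu)))  # longitudinal speed
    cT = np.sqrt(E / (2 * rho * (1 + nu)))                        # shear speed
    w = 2 * np.pi * freqs
    k = w / c_meas
    p = np.sqrt((w / cL) ** 2 - k ** 2 + 0j)   # complex sqrt covers evanescent regimes
    q = np.sqrt((w / cT) ** 2 - k ** 2 + 0j)
    # Symmetric Rayleigh-Lamb equation: tan(qh)(q^2 - k^2)^2 + 4 k^2 p q tan(ph) = 0
    res = np.tan(q * h) * (q ** 2 - k ** 2) ** 2 + 4 * k ** 2 * p * q * np.tan(p * h)
    return np.abs(res) / np.abs((q ** 2 - k ** 2) ** 2)  # normalized residual magnitude

fit = least_squares(s0_residual, x0=[120e9, 0.34],
                    bounds=([50e9, 0.2], [200e9, 0.45]))
print("Young's modulus (GPa):", fit.x[0] / 1e9, " Poisson's ratio:", fit.x[1])
```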

Keywords: lamb waves, industry 4.0, process control, elasticity, acoustoelasticity, microstructure

Procedia PDF Downloads 227
2679 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
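
A minimal sketch of the idea, under stated assumptions rather than the authors' pipeline: fractional Gaussian noise is generated from a Cholesky factor of its covariance, an Euler scheme produces fOU paths, and a small scikit-learn MLP (a stand-in for the authors' network) learns to estimate the Hurst parameter from the simulated paths. Path lengths, parameter ranges, and the network size are illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_steps, dt = 128, 1.0 / 128

def fgn(hurst, n, dt, rng):
    """Fractional Gaussian noise increments via the covariance Cholesky factor."""
    idx = np.arange(n)
    lag = np.abs(idx[:, None] - idx[None, :])
    cov = 0.5 * ((lag + 1) ** (2 * hurst) + np.abs(lag - 1) ** (2 * hurst)
                 - 2 * lag ** (2 * hurst)) * dt ** (2 * hurst)
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def fou_path(hurst, theta=2.0, mu=0.0, sigma=1.0):
    """Euler scheme for dX = theta (mu - X) dt + sigma dB^H."""
    dB = fgn(hurst, n_steps, dt, rng)
    x = np.zeros(n_steps + 1)
    for i in range(n_steps):
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * dB[i]
    return x[1:]

# Training set: simulated paths labelled with the Hurst parameter that generated them
hursts = rng.uniform(0.05, 0.45, 400)            # "rough" regime, H < 1/2
paths = np.array([fou_path(h) for h in hursts])

X_train, X_test, y_train, y_test = train_test_split(paths, hursts, test_size=0.25, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("mean absolute error on held-out Hurst values:",
      np.mean(np.abs(net.predict(X_test) - y_test)))
```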

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 118
2678 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area

Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya

Abstract:

In this paper, we propose an optimized brain computer interface (BCI) system for unspoken speech recognition, based on the fact that the constructions of unspoken words rely strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG Acquisition module based on a non-invasive headset with 14 electrodes; (ii) the Preprocessing module to remove noise and artifacts, using the Common Average Reference method; (iii) the Features Extraction module, using Wavelet Packet Transform (WPT); (iv) the Classification module based on a one-hidden-layer artificial neural network. The present study consists of comparing the recognition accuracy of 5 Arabic words, when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the selection effect of the subbands produced by the WPT module. After applying the artificial neural network on the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of 8 levels of the WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a high reduction of the dataset size, equal to approximately 19% of the total dataset, with 67.5% accuracy. This reduction appears particularly important for improving the design of a low-cost and simple-to-use BCI, trained for several words.
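
As a hedged sketch of the feature-extraction and classification modules (not the authors' implementation), the snippet below extracts wavelet-packet subband energies from each EEG channel with PyWavelets and classifies them with a one-hidden-layer MLP. The wavelet, decomposition level, subband selection, epoch sizes, and random placeholder data are all assumptions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_samples, n_channels = 200, 1024, 4   # 4 electrodes near the Wernicke area (assumed)
n_words = 5

# Placeholder EEG epochs (trials, channels, samples); real data would come from the headset.
eeg = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, n_words, n_trials)       # one of five words per trial

def wpt_features(signal_1d, level=3, keep=slice(1, 7)):
    """Energy of selected wavelet-packet subbands (here 6 'middle' of 8 subbands, assumed)."""
    wp = pywt.WaveletPacket(data=signal_1d, wavelet="db4", mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.asarray(node.data) ** 2) for node in nodes])
    return energies[keep]

X = np.array([np.concatenate([wpt_features(trial[ch]) for ch in range(n_channels)])
              for trial in eeg])

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)  # one hidden layer
clf.fit(X_train, y_train)
print("test accuracy (near chance on random placeholder data):", clf.score(X_test, y_test))
```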

Keywords: brain-computer interface, speech recognition, artificial neural network, electroencephalography, EEG, Wernicke area

Procedia PDF Downloads 272
2677 A Soft Switching PWM DC-DC Boost Converter with Increased Efficiency by Using ZVT-ZCT Techniques

Authors: Yakup Sahin, Naim Suleyman Ting, Ismail Aksoy

Abstract:

In this paper, an improved active snubber cell is proposed for the soft switching (SS) family of pulse width modulation (PWM) DC-DC converters. The improved snubber cell provides zero-voltage transition (ZVT) turn-on and zero-current transition (ZCT) turn-off for the main switch. The snubber cell decreases EMI noise and operates with SS over a wide range of line and load voltages. Besides, all of the semiconductor devices in the converter operate with SS. There is no additional voltage or current stress on the main devices. Additionally, no extra voltage stress occurs on the auxiliary switch, and its current stress remains at an acceptable value. The improved converter has a low cost and a simple structure. The theoretical analysis of the converter is presented, and the operating states are given in detail. The experimental results are obtained from a 500 W, 100 kHz prototype. It is observed that the experimental results agree well with the theoretical analysis of the converter.

Keywords: active snubber cells, DC-DC converters, zero-voltage transition, zero-current transition

Procedia PDF Downloads 1020
2676 Electrodeposition of NiO Films from Organic Solvent-Based Electrolytic Solutions for Solar Cell Application

Authors: Thierry Pauporté, Sana Koussi, Fabrice Odobel

Abstract:

The preparation of semiconductor oxide layers and structures by soft techniques is an important field of research. Higher performances are expected from optimizing the oxide films and from using new preparation methods that give better control of their chemical, morphological, electrical, and optical properties. We present the preparation of NiO by electrodeposition from a pure polar aprotic medium and from mixtures with water. The effect of the solvent, of the electrochemical deposition parameters, and of the post-deposition annealing treatment on the structural, morphological, and optical properties of the films is investigated. We remarkably show that the solvent is inserted in the deposited layer and acts as a blowing agent, giving rise to mesoporous films after its elimination by thermal annealing. These layers of p-type oxide have been successfully used, after sensitization by a dye, in p-type dye-sensitized solar cells. The effects of the solvent on the layer properties and the application of these layers in p-type dye-sensitized solar cells are described.

Keywords: NiO, layer, p-type sensitized solar cells, electrodeposition

Procedia PDF Downloads 297
2675 Mechanical Response Investigation of Wafer Probing Test with Vertical Cobra Probe via the Experiment and Transient Dynamic Simulation

Authors: De-Shin Liu, Po-Chun Wen, Zhen-Wei Zhuang, Hsueh-Chih Liu, Pei-Chen Huang

Abstract:

Wafer probing tests play an important role in semiconductor manufacturing procedures in view of the yield and reliability requirements of the wafer after the back-end-of-line process. Accordingly, stable physical and electrical contact between the probe and the tested wafer during wafer probing is regarded as an essential issue in identifying the known good die. The probe card can be integrated with multiple probe needles, which are classified as vertical, cantilever, and micro-electro-mechanical systems (MEMS) types. Among all potential probe types, the vertical probe has several advantages as compared with other probe types, including maintainability, high probe density, and feasibility for high-speed wafer testing. In the present study, the mechanical response of the wafer probing test with a vertical cobra probe on a 720 μm thick silicon (Si) substrate with a 1.4 μm thick aluminum (Al) pad is investigated by experiment and a transient dynamic simulation approach. Because the deformation mechanism of the vertical cobra probe is determined by both bending and buckling mechanisms, the stable correlation between contact forces and overdrive (OD) length must be carefully verified. Moreover, a suitable OD length with the corresponding contact force contributes to piercing the native oxide layer of the Al pad while preventing probing-test-induced damage to the interconnect system. Accordingly, the scratch depth of the Al pad under various OD lengths is estimated by atomic force microscopy (AFM) and simulation. In the wafer probing test configuration, the contact between the probe needle and the tested object introduces large deformation and twisting of the mesh grid, causing numerical divergence. For this reason, the arbitrary Lagrangian-Eulerian method is utilized in the present simulation work to overcome this issue. The results reveal a slight difference between simulation and measurement at an OD of 40 μm, whereas the simulated scratch depths of the Al pad are almost identical to the measured ones at higher OD lengths up to 70 μm. This phenomenon can be attributed to unstable contact of the probe at low OD lengths, where the scratch depth is below 30% of the Al pad thickness; the contact becomes stable when the scratch depth exceeds 30% of the pad thickness. The splash of the Al pad is observed by AFM, and the splashed Al debris accumulates on a specific side; this phenomenon is successfully reproduced in the transient dynamic simulation. Thus, the preferred testing OD lengths are found to be 45 μm to 70 μm, and the corresponding scratch depths on the Al pad are 31.4% and 47.1% of the Al pad thickness, respectively. The investigation approach demonstrated in this study contributes to analyzing the mechanical response of the wafer probing test configuration under large strain conditions and to assessing the geometric designs and material selections of probe needles to meet the requirements of high-resolution and high-speed wafer-level probing tests for thinned wafer applications.

Keywords: wafer probing test, vertical probe, probe mark, mechanical response, FEA simulation

Procedia PDF Downloads 57
2674 The Human Process of Trust in Automated Decisions and Algorithmic Explainability as a Fundamental Right in the Exercise of Brazilian Citizenship

Authors: Paloma Mendes Saldanha

Abstract:

Access to information is a prerequisite for democracy while also guiding the material construction of fundamental rights. The exercise of citizenship requires knowing, understanding, questioning, advocating for, and securing rights and responsibilities. In other words, it goes beyond mere active electoral participation and materializes through awareness and the struggle for rights and responsibilities in the various spaces occupied by the population in their daily lives. In times of hyper-cultural connectivity, active citizenship is shaped through ethical trust processes, most often established between humans and algorithms. Automated decisions, so prevalent in various everyday situations, such as purchase preference predictions, virtual voice assistants, reduction of accidents in autonomous vehicles, content removal, resume selection, etc., have already found their place as a normalized discourse that sometimes does not reveal or make clear what violations of fundamental rights may occur when algorithmic explainability is lacking. In other words, technological and market development promotes a normalization for the use of automated decisions while silencing possible restrictions and/or breaches of rights through a culturally modeled, unethical, and unexplained trust process, which hinders the possibility of the right to a healthy, transparent, and complete exercise of citizenship. In this context, the article aims to identify the violations caused by the absence of algorithmic explainability in the exercise of citizenship through the construction of an unethical and silent trust process between humans and algorithms in automated decisions. As a result, it is expected to find violations of constitutionally protected rights such as privacy, data protection, and transparency, as well as the stipulation of algorithmic explainability as a fundamental right in the exercise of Brazilian citizenship in the era of virtualization, facing a threefold foundation called trust: culture, rules, and systems. To do so, the author will use a bibliographic review in the legal and information technology fields, as well as the analysis of legal and official documents, including national documents such as the Brazilian Federal Constitution, as well as international guidelines and resolutions that address the topic in a specific and necessary manner for appropriate regulation based on a sustainable trust process for a hyperconnected world.

Keywords: artificial intelligence, ethics, citizenship, trust

Procedia PDF Downloads 64
2673 A Study of Industry 4.0 and Digital Transformation

Authors: Ibrahim Bashir, Yahaya Y. Yusuf

Abstract:

The ongoing shift towards Industry 4.0 represents a critical growth factor in the industrial enterprise, where the digital transformation of industries is increasingly seen as a crucial element for competitiveness. This transformation holds substantial potential, yet its full benefits have yet to be realized due to the fragmented approach to introducing Industry 4.0 technologies. Therefore, this pilot study aims to explore the individual and collective impact of Industry 4.0 technologies and digital transformation on organizational performance. Data were collected through a questionnaire-based survey across 51 companies in the manufacturing industry in the United Kingdom. Correlation and multiple linear regression analyses were conducted to assess the relationships and impacts between the variables in the study. The results show that Industry 4.0 and digital transformation positively influence organizational performance and that Industry 4.0 technologies positively influence digital transformation. The results of this pilot study indicate that the implementation of Industry 4.0 technologies is vital for increasing organizational performance; however, their roles differ considerably. The differences are manifest in how the types of Industry 4.0 technologies correlate with how organizations integrate digital technologies into their operations. Hence, there is a clear indication of a strong correlation between Industry 4.0 technology, digital transformation, and organizational performance. Consequently, our study presents numerous pertinent implications that propel the theory of I4.0, digital business transformation (DBT), and organizational performance forward, as well as guide managers in the manufacturing sector.
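
A hedged sketch of the correlation and regression step: the snippet below computes pairwise correlations and fits a multiple linear regression of organizational performance on Industry 4.0 adoption and digital transformation scores. The variable names and the synthetic Likert-scale scores are assumptions, not the study's survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 51   # number of surveyed companies reported in the abstract

# Hypothetical composite scores (1-7 Likert range) per company
survey = pd.DataFrame({
    "industry40": rng.uniform(1, 7, n),
    "digital_transformation": rng.uniform(1, 7, n),
})
survey["performance"] = (0.4 * survey["industry40"]
                         + 0.3 * survey["digital_transformation"]
                         + rng.normal(0, 0.8, n))

print(survey.corr().round(2))                      # pairwise correlations

ols = smf.ols("performance ~ industry40 + digital_transformation", data=survey).fit()
print(ols.summary().tables[1])                     # coefficients and p-values
```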

Keywords: industry 4.0 technologies, digital transformation, digital integration, organizational performance

Procedia PDF Downloads 140
2672 Artificial Intelligence in Ethiopian Universities: The Influence of Technological Readiness, Acceptance, Perceived Risk, and Trust on Implementation - An Integrative Research Approach

Authors: Merih Welay Welesilassie

Abstract:

Understanding educators' readiness to incorporate AI tools into their teaching methods requires comprehensively examining the influencing factors. This understanding is crucial, given the potential of these technologies to personalise learning experiences, improve instructional effectiveness, and foster innovative pedagogical approaches. This study evaluated factors affecting teachers' adoption of AI tools in their English language instruction by extending the Technology Acceptance Model (TAM) to encompass digital readiness support, perceived risk, and trust. A cross-sectional quantitative survey was conducted with 128 English language teachers, supplemented by qualitative data collection from 15 English teachers. The structural model analysis indicated that implementing AI tools in Ethiopian higher education was notably influenced by digital readiness support, perceived ease of use, perceived usefulness, perceived risk, and trust. Digital readiness support positively impacted perceived ease of use, usefulness, and trust while reducing safety and privacy risks. Perceived ease of use positively correlated with perceived usefulness but negatively influenced trust. Furthermore, perceived usefulness strengthened trust in AI tools, while perceived safety and privacy risks significantly undermined trust. Trust was crucial in increasing educators' willingness to adopt AI technologies. The qualitative analysis revealed that the teachers exhibited strong content and pedagogical knowledge but needed more technology-related knowledge. Moreover, it was found that the teachers did not utilise digital tools to teach English. The study identified several obstacles to incorporating digital tools into English lessons, such as insufficient digital infrastructure, a shortage of educational resources, inadequate professional development opportunities, and challenging policies and governance. The findings provide valuable guidance for educators, inform policymakers about creating supportive digital environments, and offer a foundation for further investigation into technology adoption in educational settings in Ethiopia and similar contexts.

Keywords: digital readiness support, AI acceptance, risk, trust

Procedia PDF Downloads 15
2671 Simulation of Single-Track Laser Melting on IN718 using Material Point Method

Authors: S. Kadiyala, M. Berzins, D. Juba, W. Keyrouz

Abstract:

This paper describes the Material Point Method (MPM) for simulating a single-track laser melting process on an IN718 solid plate. MPM, known for simulating challenging multiphysics problems, is used to model the intricate thermal, mechanical, and fluid interactions during the laser sintering process. This study analyzes the formation of single tracks, exploring the impact of varying laser parameters such as speed, power, and spot diameter on the melt pool and track formation. The focus is on MPM’s ability to accurately simulate and capture the transient thermo-mechanical and phase change phenomena, which are critical in predicting the cooling rates before and after solidification of the laser track and the final melt pool geometry. The simulation results are rigorously compared with experimental data (AMB2022 benchmarks), demonstrating the effectiveness of MPM in replicating the physical processes in laser sintering. This research highlights the potential of MPM in advancing the understanding and simulation of melt pool physics in metal additive manufacturing, paving the way for optimized process parameters and improved material performance.

Keywords: additive manufacturing simulation, material point method, phase change, melt pool physics

Procedia PDF Downloads 59
2670 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
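
As a hedged illustration of the storage-free attractor picture sketched at the end of this abstract, the following toy Hopfield-style network (an assumption for illustration, not a model drawn from the literature reviewed here) makes a handful of patterns the stable equilibrium points of a recurrent dynamics and recovers one of them from a corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 100, 3

# Desired "memories": random bipolar patterns that should become attractor states
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian weights: each stored pattern lowers the network energy at that state
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Asynchronous updates descend the energy function until a stable state is reached."""
    state = cue.copy()
    for _ in range(steps):
        for i in rng.permutation(n_units):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt a stored pattern by flipping 20% of its units, then let the dynamics settle
cue = patterns[0].copy()
flip = rng.choice(n_units, size=20, replace=False)
cue[flip] *= -1

recovered = recall(cue)
print("overlap with original pattern:", (recovered @ patterns[0]) / n_units)
```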

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 271
2669 Simulation and Experimental Research on Pocketing Operation for Toolpath Optimization in CNC Milling

Authors: Rakesh Prajapati, Purvik Patel, Avadhoot Rajurkar

Abstract:

Nowadays, manufacturing industries augment their production lines with modern machining centers backed by CAM software. Several attempts are being made to cut down the programming time for machining complex geometries. Special programs/software have been developed to generate the digital numerical data and to prepare NC programs using suitable post-processors for different machines. After selecting the tools and the manufacturing process, tool paths are applied and the NC program is generated. More and more complex mechanical parts that were earlier cast and assembled/manufactured by other processes are now being machined. The majority of these parts require many pocketing operations and find their applications in die and mold, turbomachinery, aircraft, nuclear, defense, etc. Pocketing operations involve the removal of a large quantity of material from the metal surface. In this work, a food-processing part with a warm-cast blank and its clamping were modeled using Pro-E and MasterCAM® software. The pocketing operation has been specifically chosen for toolpath optimization. After applying the pocketing toolpath, multi-tool selection, and air-time reduction, the software simulation time and the experimental machining time are compared.

Keywords: toolpath, part program, optimization, pocket

Procedia PDF Downloads 288
2668 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger

Authors: Hany Elsaid Fawaz Abdallah

Abstract:

This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset of Nusselt number and pressure drop values in the following ranges of dimensionless parameters: plate corrugation angle from 0° to 60°, Reynolds number from 10000 to 40000, pitch-to-height ratio from 1 to 4, and Prandtl number from 0.7 to 200. Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons has been chosen. The training procedure includes back-propagation with bias and weight adjustment, evaluation of the loss function for the training and validation datasets, and feed-forward propagation of the input parameters. The linear function was used as the activation function at the output layer, while the rectified linear unit activation function was utilized for the hidden layers. In order to accelerate the ANN training, the loss function minimization may be achieved by the adaptive moment estimation algorithm (Adam). The ‘‘MinMax’’ normalization approach was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for the ANN training, a cross-validation technique is applied to the ANN using the new data. This procedure was repeated until loss function convergence was achieved or for 4000 epochs with a batch size of 200 points. The program code was written in Python 3 using open-source libraries such as scikit-learn, TensorFlow, and Keras. Mean average percent errors of 9.4% for the Nusselt number and 8.2% for the pressure drop have been achieved with the ANN model. Therefore, higher accuracy compared to the generalized correlations was achieved. The performance validation of the obtained model was based on a comparison of predicted data with the experimental results, yielding excellent accuracy.
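
A minimal sketch of a network of this shape, assuming the four dimensionless inputs and two outputs (Nusselt number and pressure drop): the {12, 8, 6} hidden layers, ReLU hidden activations, linear output layer, Adam optimizer, and min-max scaling follow the abstract, while the synthetic data and training settings are placeholders, not the authors' dataset or code.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

rng = np.random.default_rng(0)
# Placeholder dataset: [corrugation angle, Reynolds number, pitch-to-height ratio, Prandtl number]
X = np.column_stack([rng.uniform(0, 60, 2000),
                     rng.uniform(1e4, 4e4, 2000),
                     rng.uniform(1, 4, 2000),
                     rng.uniform(0.7, 200, 2000)])
y = np.column_stack([0.02 * X[:, 1] ** 0.8,          # stand-in for Nusselt number
                     1e-4 * X[:, 1] ** 1.8])         # stand-in for pressure drop

x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()  # 'MinMax' normalization as in the abstract
Xs, ys = x_scaler.fit_transform(X), y_scaler.fit_transform(y)

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(12, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(6, activation="relu"),
    keras.layers.Dense(2, activation="linear"),      # Nusselt number and pressure drop
])
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")
model.fit(Xs, ys, validation_split=0.2, epochs=200, batch_size=200, verbose=0)
print("loss on scaled data:", model.evaluate(Xs, ys, verbose=0))
```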

Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations

Procedia PDF Downloads 87
2667 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles

Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi

Abstract:

Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine, and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. This study uses information on around 40,000 vehicles' specifications and operational environmental conditions, such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms linear regression (LR), K-nearest neighbor (KNN), and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements. The performance of the algorithms is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data, and 4.2% on operational data.
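
As a hedged sketch of the evaluation protocol (nested cross-validation followed by a statistical comparison, here the Friedman test named in the keywords), the snippet below compares LR, KNN, and an ANN on a synthetic regression task; the feature set, hyperparameter grids, and fold counts are assumptions, not the study's setup.

```python
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder features (vehicle specification + operating conditions) and FC target
X = rng.random((500, 8))
y = 20 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 0.5, 500)

models = {
    "LR": (make_pipeline(StandardScaler(), LinearRegression()), {}),
    "KNN": (make_pipeline(StandardScaler(), KNeighborsRegressor()),
            {"kneighborsregressor__n_neighbors": [3, 5, 9]}),
    "ANN": (make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0)),
            {"mlpregressor__hidden_layer_sizes": [(16,), (32, 16)]}),
}

outer = KFold(n_splits=5, shuffle=True, random_state=0)
scores = {}
for name, (pipe, grid) in models.items():
    # Inner CV tunes hyperparameters; outer CV gives an unbiased error estimate
    search = GridSearchCV(pipe, grid, cv=3, scoring="neg_mean_absolute_error")
    scores[name] = -cross_val_score(search, X, y, cv=outer, scoring="neg_mean_absolute_error")
    print(f"{name}: MAE per outer fold = {np.round(scores[name], 3)}")

# Friedman test: do the three algorithms differ systematically across the outer folds?
stat, p = friedmanchisquare(scores["LR"], scores["KNN"], scores["ANN"])
print(f"Friedman statistic = {stat:.3f}, p-value = {p:.3f}")
```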

Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing

Procedia PDF Downloads 178
2666 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms

Authors: Dimitrios Kafetzopoulos

Abstract:

Nowadays, companies are more concerned with adopting their own strategies for increased efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that boosts changes to companies’ operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization’s competitiveness. So, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to their products and their processes. Creating the appropriate culture for changes in terms of products and processes helps companies to gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in a company’s operations, taking into consideration not only product changes but also process changes, and to measure the impact of these two types of changes on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. In order to achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency, and sustainability). The constructs of radical and incremental operational changes, each treated as one variable, have been subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normal distribution, and outliers were checked. Moreover, the unidimensionality, reliability, and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. In order to test the research hypotheses, the SEM technique was applied (maximum likelihood method). The goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present study's findings, radical operational changes and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, it is in the dimension of radical operational changes, meaning those in process and product, that the most significant contributors to firm efficiency are to be found, while their influence on sustainability is low albeit statistically significant. On the contrary, incremental operational changes influence sustainability more than firms’ efficiency. From the above, it is apparent that embedding changes in a firm's product and process operational practices has direct and positive consequences for what it achieves from an efficiency and sustainability perspective.

Keywords: incremental operational changes, radical operational changes, efficiency, sustainability

Procedia PDF Downloads 136
2665 Aromatic Medicinal Plant Classification Using Deep Learning

Authors: Tsega Asresa Mengistu, Getahun Tigistu

Abstract:

Computer vision is an artificial intelligence subfield that allows computers and systems to retrieve meaning from digital images. It is applied in various fields such as self-driving cars, video surveillance, agriculture, quality control, health care, construction, the military, and everyday life. Aromatic and medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, and other natural health products for therapeutic and aromatic culinary purposes. Herbal industries depend on these special plants. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but also provide industrial raw materials for export and thus valuable foreign exchange. There is a lack of technologies for the classification and identification of aromatic and medicinal plants in Ethiopia. Manual identification of plants is a tedious, time-consuming, labor-intensive, and lengthy process. For farmers, industry personnel, academics, and pharmacists, it is still difficult to identify the parts and usage of plants before ingredient extraction. In order to solve this problem, the researcher uses a deep learning approach for the efficient identification of aromatic and medicinal plants with a convolutional neural network. The objective of the proposed study is to identify aromatic and medicinal plant parts and usages using computer vision technology. Therefore, this research initiated a model for the automatic classification of aromatic and medicinal plants by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants, besides the root, flower, fruit, latex, and bark. The study was conducted on aromatic and medicinal plants available at the Ethiopian Institute of Agricultural Research center. An experimental research design is proposed for this study, using convolutional neural networks and transfer learning. The researcher employs a sigmoid activation in the last layer and rectified linear units (ReLU) in the hidden layers. Finally, the researcher obtained a classification accuracy of 66.4% with the convolutional neural network, 67.3% with MobileNet, and 64% with the Visual Geometry Group (VGG) network.
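
A minimal transfer-learning sketch along these lines is shown below in Keras; the image size, number of classes, and dataset directory are assumptions, and only the reported architectural choices (a frozen pretrained MobileNet backbone, a ReLU hidden layer, and a sigmoid output layer) are taken from the study.

```python
# Minimal transfer-learning sketch (assumed image size, class count, and data layout).
import tensorflow as tf

NUM_CLASSES = 20  # hypothetical number of plant part/usage classes

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),        # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),             # ReLU in the hidden layer
    tf.keras.layers.Dense(NUM_CLASSES, activation="sigmoid"),  # sigmoid in the last layer
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",  # pairs with the sigmoid output
              metrics=["accuracy"])

# One sub-folder per class under plant_images/ (hypothetical path); the folder count
# must match NUM_CLASSES so the one-hot labels line up with the output layer.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "plant_images/", label_mode="categorical", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=10)
```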

Keywords: aromatic and medicinal plants, computer vision, deep convolutional neural network

Procedia PDF Downloads 438
2664 Pharmacovigilance: An Empowerment in Safe Utilization of Pharmaceuticals

Authors: Pankaj Prashar, Bimlesh Kumar, Ankita Sood, Anamika Gautam

Abstract:

Pharmacovigilance (PV) has grown rapidly over the past few decades as a discipline in the pharmaceutical industry and an integral part of clinical research and drug development. PV spans a broad scope, from drug manufacturing to regulation and safer utilization. The fundamental steps of PV include not only data collection and verification, coding of drugs with adverse drug reactions (ADRs), causality assessment, and timely reporting to the authorities, but also monitoring of drug manufacturing, safety issues, and product quality, and the conduct of due diligence. Standardization of adverse event information, collaboration among multiple departments in different companies, and preparation of documents in accordance with governmental as well as non-governmental bodies and guidelines (FDA, EMA, GVP, ICH) are the advancements in the discipline of PV. De-harmonization, a lack of predictive drug safety models, inadequate government funding, under-reporting, and the non-acceptance in developing countries of ADR reports submitted directly by patients to the monitoring centres are the major obstacles to PV. Mandatory pharmacovigilance reporting, frequent inspections, government funding, and the education and training of medical students, pharmacists, and nurses in this segment can bring about empowerment in PV. This area needs to be addressed with a sense of urgency for the safe utilization of pharmaceuticals.
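
The core PV workflow outlined above (collect and verify a report, code the reaction, assess causality, and report to the authorities) can be pictured with a small data-structure sketch; the field names, the coded term, and the submission function below are hypothetical placeholders, not any regulator's actual interface.

```python
# Illustrative individual case safety report moving through the basic PV steps.
from dataclasses import dataclass

@dataclass
class AdverseEventReport:
    patient_id: str
    suspect_drug: str
    reaction_term: str   # ADR coded against a dictionary such as MedDRA
    causality: str       # e.g., a WHO-UMC category: "certain", "probable", "possible", ...
    verified: bool = False

def submit_to_authority(report: AdverseEventReport) -> None:
    """Placeholder for timely reporting to the regulator (e.g., over an ICH E2B gateway)."""
    if not report.verified:
        raise ValueError("report must be verified before submission")
    print(f"Submitting ICSR: {report.suspect_drug} / {report.reaction_term} ({report.causality})")

report = AdverseEventReport("PT-0001", "Drug X", "Hepatotoxicity", "probable", verified=True)
submit_to_authority(report)
```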

Keywords: pharmacovigilance, regulatory, adverse event, drug safety

Procedia PDF Downloads 124
2663 Prediction of Distillation Curve and Reid Vapor Pressure of Dual-Alcohol Gasoline Blends Using Artificial Neural Network for the Determination of Fuel Performance

Authors: Leonard D. Agana, Wendell Ace Dela Cruz, Arjan C. Lingaya, Bonifacio T. Doma Jr.

Abstract:

The purpose of this paper is to predict fuel performance parameters, which include the drivability index (DI), vapor lock index (VLI), and vapor lock potential, using the distillation curve and Reid vapor pressure (RVP) of dual alcohol-gasoline fuel blends. The distillation curve and Reid vapor pressure were predicted using artificial neural networks (ANN) with macroscopic properties such as boiling points, RVP, and molecular weights as inputs. The ANN consists of 5 hidden layers and was trained using Bayesian regularization. The training mean square error (MSE) and R-value for the RVP ANN are 91.4113 and 0.9151, respectively, while the training MSE and R-value for the distillation curve ANN are 33.4867 and 0.9927. Fuel performance analysis of the dual alcohol-gasoline blends indicated that highly volatile gasoline blended with dual alcohols results in fuel blends that do not comply with the ASTM D4814 standard. Mixtures of low-volatility gasoline and 10% methanol or 10% ethanol can still be blended with up to 10% C3 and C4 alcohols. Gasoline of intermediate volatility containing 10% methanol or 10% ethanol can still be blended with C3 and C4 alcohols that have low RVPs, such as 1-propanol, 1-butanol, 2-butanol, and i-butanol.
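
Since Bayesian regularization is not available in common Python frameworks, a rough stand-in for the described regressor is sketched below with scikit-learn, using five hidden layers and an L2 penalty in place of Bayesian regularization; the feature values, target values, and layer widths are illustrative assumptions. Once the distillation curve is predicted, the drivability index is commonly computed from the T10, T50, and T90 distillation points (often cited as DI = 1.5·T10 + 3·T50 + T90, with an oxygenate correction in ASTM D4814).

```python
# Rough stand-in for the RVP regressor: a 5-hidden-layer MLP with an L2 penalty
# (alpha) replacing Bayesian regularization. All numbers below are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Inputs: macroscopic blend descriptors (component boiling points, RVPs, molecular weights).
X = np.array([
    [64.7,  32.0,  78.2, 15.9, 46.1, 60.1],
    [78.4,  15.9,  82.6,  4.6, 60.1, 74.1],
    [97.2,   2.8, 117.7,  0.3, 74.1, 74.1],
    [99.5,   5.3, 107.9,  1.7, 60.1, 74.1],
])
y = np.array([62.0, 48.0, 35.0, 41.0])  # e.g., blend RVP in kPa (illustrative targets)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16, 16, 16, 16),  # five hidden layers
                 alpha=1e-3,      # L2 regularization as a stand-in for Bayesian regularization
                 max_iter=5000,
                 random_state=0),
)
model.fit(X, y)
print(model.predict(X))  # fitted predictions on the training points
```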

Keywords: dual alcohol-gasoline blends, distillation curve, machine learning, reid vapor pressure

Procedia PDF Downloads 101