Search results for: input impedance
946 Multi-Stage Multi-Period Production Planning in Wire and Cable Industry
Authors: Mahnaz Hosseinzadeh, Shaghayegh Rezaee Amiri
Abstract:
This paper presents a methodology for the serial production planning problem in the wire and cable manufacturing process that addresses the problem of input-output imbalance in consecutive stations, with the aim of minimizing machine halts at each stage. To this end, a linear Goal Programming (GP) model is developed, in which four main categories of constraints are considered: the number of runs per machine, the machines' sequences, the acceptable inventories of machines at the end of each period, and the necessity of fulfilling customers' orders. The model is formulated based upon real data obtained from IKO TAK Company, an important supplier of wire and cable for the oil and gas and automotive industries in Iran. By solving the model in the GAMS software, the optimal number of runs, the end-of-period inventories, and the minimum possible idle time for each machine are calculated. The application of the numerical results in the target company has shown the efficiency of the proposed model and solution in decreasing the lead time of end-product delivery to customers by 20%. Accordingly, the developed model could easily be applied in wire and cable companies for optimal production planning that reduces machine halts across manufacturing stages.
Keywords: goal programming approach, GP, production planning, serial manufacturing process, wire and cable industry
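A minimal goal-programming sketch in the spirit of the model, written in Python with PuLP rather than GAMS; the two stations, production rates, demands, and penalty weights below are hypothetical stand-ins for the paper's data:

```python
# Minimal goal-programming sketch (not the paper's GAMS model): two serial
# stations where stranding consumes drawing output, with deviation variables
# measuring how far production falls from the demand goal.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

periods = [1, 2]
demand = {1: 120, 2: 150}          # customer orders per period (hypothetical)
rate = {"draw": 40, "strand": 30}  # units produced per machine run (hypothetical)

prob = LpProblem("wire_cable_gp", LpMinimize)
runs = {(m, t): LpVariable(f"runs_{m}_{t}", lowBound=0, cat="Integer")
        for m in rate for t in periods}
under = {t: LpVariable(f"under_{t}", lowBound=0) for t in periods}  # unmet demand
over = {t: LpVariable(f"over_{t}", lowBound=0) for t in periods}    # excess inventory

for t in periods:
    # Serial balance: the stranding stage cannot process more than was drawn
    prob += rate["strand"] * runs[("strand", t)] <= rate["draw"] * runs[("draw", t)]
    # Goal constraint: output + under-achievement - over-achievement = demand
    prob += rate["strand"] * runs[("strand", t)] + under[t] - over[t] == demand[t]

# Penalize unmet demand most heavily, then end-of-period inventory
prob += lpSum(10 * under[t] + 1 * over[t] for t in periods)
prob.solve()
for (m, t), v in runs.items():
    print(m, t, v.value())
```

The deviation variables are the characteristic GP device: the goals enter as soft equalities, and the objective ranks how badly each kind of deviation hurts.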
Procedia PDF Downloads 160
945 Prediction of Oil Recovery Factor Using Artificial Neural Network
Authors: O. P. Oladipo, O. A. Falode
Abstract:
The determination of the Recovery Factor is of great importance to the reservoir engineer, since it relates reserves to the initial oil in place. Reserves are the producible portion of reservoirs and give an indication of the profitability of a field development. The core objective of this project is to develop an artificial neural network model using selected reservoir data to predict the Recovery Factors (RF) of hydrocarbon reservoirs and to compare the model with a couple of the existing correlations. The type of Artificial Neural Network model developed was the Single-Layer Feed-Forward Network. MATLAB was used as the network simulator, and the network was trained using the supervised learning method. Afterwards, the network was tested with input data never seen by the network. The predicted recovery factors from the Artificial Neural Network model, the API correlation for water-drive reservoirs (sands and sandstones), and the Guthrie and Greenberger correlation equation were obtained and compared. The coefficient of correlation of the Artificial Neural Network model was higher than those of the other two correlation equations, making it the more accurate prediction tool. The Artificial Neural Network, because of its accurate prediction ability, is helpful in the correct prediction of hydrocarbon reservoir factors. Artificial Neural Networks could also be applied to the prediction of other petroleum engineering parameters, because they are able to recognize complex patterns in data sets and establish relationships between them.
Keywords: recovery factor, reservoir, reserves, artificial neural network, hydrocarbon, MATLAB, API, Guthrie, Greenberger
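An illustrative sketch of such a single-hidden-layer feed-forward regressor, using scikit-learn in place of the paper's MATLAB toolchain; the reservoir feature names and the synthetic target are assumptions, not the study's data:

```python
# Single-hidden-layer feed-forward network for recovery-factor prediction,
# trained and tested on a held-out split as the abstract describes.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: porosity, permeability (mD), water saturation, API gravity
X = rng.uniform([0.05, 10, 0.1, 15], [0.35, 2000, 0.5, 45], size=(200, 4))
y = 30 * X[:, 0] + 0.01 * np.log(X[:, 1]) - 20 * X[:, 2] + 0.2 * X[:, 3]  # toy RF%

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
print("R^2 on unseen data:", model.score(X_te, y_te))
```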
Procedia PDF Downloads 441
944 Teaching Tools for Web Processing Services
Authors: Rashid Javed, Hardy Lehmkuehler, Franz Josef-Behr
Abstract:
Web Processing Services (WPS) have attracted growing interest in geoinformation research. However, teaching about them is difficult because of the generally complex circumstances of their use, which limit the possibilities for hands-on exercises on Web Processing Services. To support understanding, a Training Tools Collection was initiated at the University of Applied Sciences Stuttgart (HFT). It is limited to the scope of geostatistical interpolation of sample point data, where different algorithms can be used, such as Inverse Distance Weighting (IDW), Nearest Neighbor, etc. The Tools Collection aims to support understanding of the scope, definition, and deployment of Web Processing Services. For example, it is necessary to characterize the input of an interpolation by the data set, the parameters for the algorithm, and the interpolation results (here a grid of interpolated values is assumed). This paper reports on first experiences using a pilot installation, which was intended to find suitable software interfaces for later full implementations and to draw conclusions on potential user interface characteristics. Experience was gained with the Deegree software, one of several service suites (collections). Strictly programmed in Java, Deegree offers several OGC-compliant service implementations that also promise to be of benefit for the project. The mentioned parameters for a WPS were formalized following the paradigm that any meaningful component is defined in terms of suitable standards; for example, the data output can be defined as a GML file. The choice of meaningful information pieces and user interactions is not free, however, but is partially determined by the selected WPS processing suite.
Keywords: deegree, interpolation, IDW, web processing service (WPS)
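For reference, a minimal IDW interpolator of the kind such a training tool demonstrates; the sample points, values, and power parameter are illustrative:

```python
# Inverse Distance Weighting: each query point gets a weighted average of the
# known values, with weights decaying as an inverse power of distance.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Interpolate z at query points from known sample points by IDW."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # inverse-distance weights
    return (w @ z_known) / w.sum(axis=1)  # weighted average per query point

samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([1.0, 2.0, 3.0, 4.0])
grid = np.array([[0.5, 0.5], [0.25, 0.75]])
print(idw(samples, values, grid))
```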
Procedia PDF Downloads 355
943 Using Machine Learning to Classify Human Fetal Health and Analyze Feature Importance
Authors: Yash Bingi, Yiqiao Yin
Abstract:
Reduction of child mortality is an ongoing struggle and a commonly used factor in determining progress in the medical field. Under-5 deaths number around 5 million worldwide, with many of them being preventable. In light of this issue, cardiotocograms (CTGs) have emerged as a leading tool to determine fetal health. By using ultrasound pulses and reading the responses, CTGs help healthcare professionals assess the overall health of the fetus and determine the risk of child mortality. However, interpreting the results of CTGs is time-consuming and inefficient, especially in underdeveloped areas where an expert obstetrician is hard to come by. Using a support vector machine (SVM) and oversampling, this paper proposes a model that classifies fetal health with an accuracy of 99.59%. To further explain the CTG measurements, an algorithm based on Randomized Input Sampling for Explanation (RISE) of black-box models was created, called Feature Alteration for explanation of Black Box Models (FAB), and its findings were compared to Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). This allows doctors and medical professionals to classify fetal health with high accuracy and determine which features were most influential in the process.
Keywords: machine learning, fetal health, gradient boosting, support vector machine, Shapley values, local interpretable model agnostic explanations
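A sketch of the SVM-plus-oversampling step, assuming SMOTE as the oversampler (the abstract does not name one) and random stand-in data in place of the CTG feature set:

```python
# Oversample only the training split, then fit an RBF SVM; the class
# imbalance below mimics the skew of normal vs. pathological CTG records.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 21))                             # 21 CTG-derived features
y = rng.choice([1, 2, 3], size=500, p=[0.78, 0.14, 0.08])  # imbalanced classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance classes
clf = SVC(kernel="rbf", C=10.0).fit(X_res, y_res)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Oversampling is applied after the split so that synthetic minority samples never leak into the test set.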
Procedia PDF Downloads 144
942 Spatiotemporal Analysis of Visual Evoked Responses Using Dense EEG
Authors: Rima Hleiss, Elie Bitar, Mahmoud Hassan, Mohamad Khalil
Abstract:
A comprehensive study of object recognition in the human brain requires combining both spatial and temporal analysis of brain activity. Here, we are mainly interested in three issues: the time perception of visual objects, the ability to discriminate between two particular categories (objects vs. animals), and the possibility of identifying a particular spatial representation of visual objects. Our experiment consisted of acquiring dense electroencephalographic (EEG) signals during a picture-naming task comprising a set of object and animal images. These EEG responses were recorded from nine participants. In order to determine the time perception of the presented visual stimulus, we analyzed the Event-Related Potentials (ERPs) derived from the recorded EEG signals. The analysis of these signals showed that the brain perceives animals and objects at different time instants. Concerning the discrimination of the two categories, a support vector machine (SVM) was applied to the instantaneous EEG (excellent temporal resolution: on the order of milliseconds) to categorize the visual stimuli into two different classes. The spatial differences between the evoked responses of the two categories were also investigated. The results showed a variation of the neural activity with the properties of the visual input. They also showed the existence of a spatial pattern of electrodes over particular regions of the scalp corresponding to their responses to the visual inputs.
Keywords: brain activity, categorization, dense EEG, evoked responses, spatio-temporal analysis, SVM, time perception
Procedia PDF Downloads 422
941 Time Series Forecasting (TSF) Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time into the future to predict. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the UCI website, which includes a multivariate time series of many factors measured on an hourly basis over a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131), for most of our single-step and multi-step predictions. The best look-back window size to predict 1 hour into the future appears to be one day, while 2 or 4 days perform best to predict 3 hours into the future.
Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window
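A sketch of the fixed-length look-back windowing that frames TSF as supervised learning; the window and horizon sizes follow the abstract's best single-step setup (one day of hourly data to predict 1 hour ahead), and the series itself is synthetic:

```python
# Slice a 1-D series into (past window, future target) pairs so that any of
# the compared models (RNN/LSTM/GRU/Transformer) can be trained on them.
import numpy as np

def make_windows(series, look_back, horizon):
    """Return X of shape (n, look_back) and y of shape (n, horizon)."""
    X, y = [], []
    for i in range(len(series) - look_back - horizon + 1):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back:i + look_back + horizon])
    return np.array(X), np.array(y)

hourly = np.sin(np.arange(24 * 30) * 2 * np.pi / 24)  # a month of fake hourly data
X, y = make_windows(hourly, look_back=24, horizon=1)  # 1-day window, 1-hour-ahead
print(X.shape, y.shape)  # (696, 24) (696, 1)
```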
Procedia PDF Downloads 154
940 Static Analysis of Security Issues of the Python Packages Ecosystem
Authors: Adam Gorine, Faten Spondon
Abstract:
Python is considered the most popular programming language and offers its own ecosystem for archiving and maintaining open-source software packages. This system is called the Python Package Index (PyPI), the repository of this programming language. Unfortunately, one-third of these software packages have vulnerabilities that allow attackers to execute code automatically when a vulnerable or malicious package is installed. This paper contributes to the large-scale empirical studies investigating security issues in the Python ecosystem by evaluating package vulnerabilities. The findings provide a series of implications that can help the security of software ecosystems by improving the process of discovering, fixing, and managing package vulnerabilities. The vulnerable dataset is generated using the NVD, the National Vulnerability Database, and the Snyk vulnerability dataset. In addition, we evaluated 807 vulnerability reports in the NVD and 3900 publicly known security vulnerabilities in the Python Package Manager (pip) from the Snyk database, covering 2002 to 2022. As a result, many Python vulnerabilities are of high severity, followed by medium severity. The most problematic areas have been improper input validation and denial-of-service attacks. A hybrid scanning tool that combines the three scanners Bandit, Snyk, and Dlint, which provides a clear report of code vulnerabilities, is also described.
Keywords: Python vulnerabilities, bandit, Snyk, Dlint, Python package index, ecosystem, static analysis, malicious attacks
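A sketch of how one leg of such a hybrid scanner might drive Bandit from Python; the flags and JSON keys shown are Bandit's documented ones, and driving Snyk and Dlint would follow the same subprocess pattern:

```python
# Run Bandit recursively over a package directory and summarize its findings.
import json
import subprocess

def run_bandit(target_dir):
    """Invoke the Bandit CLI with JSON output and print each issue found."""
    proc = subprocess.run(
        ["bandit", "-r", target_dir, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(proc.stdout)
    for issue in report.get("results", []):
        print(issue["issue_severity"], issue["filename"], issue["issue_text"])
    return report

# Example (hypothetical path): run_bandit("downloaded_pypi_package/")
```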
Procedia PDF Downloads 139
939 Communication Styles of Business Students: A Comparison of Four National Cultures
Authors: Tiina Brandt, Isaac Wanasika
Abstract:
Culturally diverse global companies need to understand cultural differences between leaders and employees from different backgrounds. Communication is culturally contingent and has a significant impact on the effective execution of leadership goals. Awareness of cultural variations related to communication and interactions will help leaders modify their own behavior and consequently improve the execution of goals and avoid unnecessary faux pas. Our focus is on young adults who have experienced cultural integration, culturally diverse surroundings in schools and universities, and cultural travel. Our central research problem is to understand the impact of different national cultures on communication. We focus on four countries with distinct national cultures and spatial distribution: Finland, Indonesia, Russia, and the USA. Our sample is based on business students (n = 225) from various backgrounds in the four countries. Their responses on communication and leadership styles were analyzed using ANOVA and post-hoc tests. Results indicate that culture impacts communication behavior. Even young, culturally exposed adults with cultural awareness and experience demonstrate cultural differences in their behavior. Apparently, culture is a deep-seated trait that cannot be completely neutralized by environmental variables. Our study offers valuable input for leadership training programs and for expatriates in recognizing specific culture-driven differences in leaders' behavior.
Keywords: communication, culture, interaction, leadership
Procedia PDF Downloads 113
938 Application of ANN for Estimation of Power Demand of Villages in Sulaymaniyah Governorate
Abstract:
Before designing an electrical system, load estimation is necessary for unit sizing and demand-generation balancing. The system could be a stand-alone system for a village, a grid-connected system, or renewable energy integrated into a grid connection, especially as there are non-electrified villages in developing countries. In the classical model, the energy demand was found by multiplying each household appliance's rating by the duration of its operation; in this paper, however, the information that exists for electrified villages is used to predict the demand, as villages have almost the same lifestyle. This paper describes a method used to predict the average energy consumed every two months by each consumer living in a village with an Artificial Neural Network (ANN). The input data were collected using a regional survey of samples of consumers representing typical types of living, household appliances, and energy consumption, and the output data were collected from the administration office of Piramagrun for each corresponding consumer. The result of this study shows that the average demand for different consumers from four villages in different months throughout the year is approximately 12 kWh/day. The model estimates the average demand per day for every consumer with a mean absolute percentage error of 11.8%. The MathWorks MATLAB software package (version 7.6.0), which includes the Neural Network Toolbox, was used.
Keywords: artificial neural network, load estimation, regional survey, rural electrification
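The 11.8% figure is a mean absolute percentage error (MAPE); a one-function sketch of the metric, with placeholder consumption values:

```python
# MAPE compares predicted demand to measured demand, consumer by consumer.
import numpy as np

def mape(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

measured_kwh = [11.2, 12.5, 13.1, 10.8]   # hypothetical bimonthly averages
predicted_kwh = [12.0, 11.9, 12.4, 12.1]
print(f"MAPE: {mape(measured_kwh, predicted_kwh):.1f}%")
```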
Procedia PDF Downloads 123
937 Hard Disk Failure Predictions in Supercomputing System Based on CNN-LSTM and Oversampling Technique
Authors: Yingkun Huang, Li Guo, Zekang Lan, Kai Tian
Abstract:
Hard disk drive (HDD) failures in an exascale supercomputing system may lead to service interruption, invalidate previous calculations, and cause permanent data loss. Therefore, initiating corrective actions before hard drive failures materialize is critical to the continued operation of jobs. In this paper, a highly accurate analysis model based on CNN-LSTM and an oversampling technique is proposed, which can correctly predict the necessity of a disk replacement even ten days in advance. Generally, learning-based methods perform poorly on training datasets with long-tailed distributions, and failure prediction is a classic case of this, owing to the scarcity of failure data. To overcome the puzzle, a new oversampling technique was employed to augment the data, and then an improved CNN-LSTM with a shortcut connection was built to learn more effective features. The shortcut transmits the results of the previous CNN layer and, after weighted fusion with the output of the next layer, is used as the input of the LSTM model. Finally, a detailed empirical comparison of 6 prediction methods is presented and discussed on a public dataset for evaluation. The experiments indicate that the proposed method predicts disk failure with 0.91 Precision, 0.91 Recall, 0.91 F-measure, and 0.90 MCC for a 10-day prediction horizon. Thus, the proposed algorithm is an efficient algorithm for predicting HDD failure in supercomputing.
Keywords: HDD replacement, failure, CNN-LSTM, oversampling, prediction
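A schematic Keras sketch of such a CNN-LSTM with a shortcut fusing an earlier convolutional output with a later one before the LSTM; the layer sizes, input shape, and the equal-weight fusion (standing in for the paper's learned weighted fusion) are assumptions:

```python
# CNN layers extract local temporal features from SMART attributes; a
# shortcut fuses the earlier CNN output with the later one before the LSTM.
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(10, 12))             # 10 days of 12 SMART attributes
c1 = layers.Conv1D(32, 3, padding="same", activation="relu")(inp)
c2 = layers.Conv1D(32, 3, padding="same", activation="relu")(c1)
fused = layers.Average()([c1, c2])             # equal-weight fusion as the shortcut
h = layers.LSTM(64)(fused)
out = layers.Dense(1, activation="sigmoid")(h)  # P(disk fails within horizon)

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```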
Procedia PDF Downloads 79
936 Pulsed Laser Single Event Transients in 0.18 μm Partially-Depleted Silicon-On-Insulator Device
Authors: Mei Bo, Zhao Xing, Luo Lei, Yu Qingkui, Tang Min, Han Zhengsheng
Abstract:
Single Event Transients (SETs) were investigated on 0.18 μm PDSOI transistors and a 100-stage series CMOS inverter chain using a pulsed laser. The effects of different laser energies and device biases on the SET waveform were characterized experimentally, as well as the generation and propagation of SETs in the inverter chain. In this paper, the effects of the struck transistor type and the struck locations on SETs were investigated. The results showed that when NMOSFETs were irradiated from the 100th to the 2nd stage, the SET pulse width measured at the output terminal increased from 287.4 ps to 472.9 ps, and when PMOSFETs were irradiated from the 99th to the 1st stage, the SET pulse width likewise increased from 287.4 ps to 472.9 ps. When the struck locations were close to the output of the chain, the SET pulse was narrow; when the struck nodes were close to the input, the SET pulse was broadened. SET pulses were progressively broadened while propagating along the inverter chains, and this pulse broadening is independent of the type of struck transistor. Analysis shows that history-effect-induced threshold voltage hysteresis in PDSOI is the reason for the pulse broadening. The positive pulse observed on the oscilloscope, contrary to the expected results, is due to the charging and discharging of the capacitor.
Keywords: single event transients, pulse laser, partially-depleted silicon-on-insulator, propagation-induced pulse broadening effect
Procedia PDF Downloads 412
935 Reduction Conditions of Briquetted Solid Wastes Generated by the Integrated Iron and Steel Plant
Authors: Gökhan Polat, Dicle Kocaoğlu Yılmazer, Muhlis Nezihi Sarıdede
Abstract:
Iron oxides are the main input for producing iron in integrated iron and steel plants. During the production of iron from iron oxides, some wastes with high iron content occur. These main wastes can be classified as basic oxygen furnace (BOF) sludge, flue dust, and rolling scale. Recycling these wastes is of great importance both for its environmental effects and for the reduction of production costs. In this study, recycling experiments were performed on basic oxygen furnace sludge, flue dust, and rolling scale, which contain 53.8%, 54.3% and 70.2% iron, respectively. These wastes were mixed together with coke as the reducer, and the mixtures were pressed to obtain cylindrical briquettes under various compacting forces from 1 ton to 6 tons. Both the stoichiometric and twice the stoichiometric amount of coke were added to investigate the effect of the coke amount on the reduction properties of the waste mixtures. The briquettes were then reduced at 1000°C and 1100°C for 30, 60, 90, 120 and 150 min in a muffle furnace. According to the results of the reduction experiments, the effects of compacting force, temperature, and time on the reduction ratio of the wastes were determined. It was found that a compacting force of 1 ton, a reduction time of 150 min, and 1100°C are the optimum conditions for obtaining a reduction ratio higher than 75%.
Keywords: coke, iron oxide wastes, recycling, reduction
Procedia PDF Downloads 340
934 Real-Time Multi-Vehicle Tracking Application at Intersections Based on Feature Selection in Combination with Color Attribution
Authors: Qiang Zhang, Xiaojian Hu
Abstract:
In multi-vehicle tracking based on feature selection in combination with color attribution, the tracking system efficiently tracks vehicles in a video with minimal error, presenting a simple and fast, yet accurate and robust, solution to problems such as the inaccurate and untimely responses of statistics-based adaptive traffic control systems in the intersection scenario. In this study, a real-time tracking system is proposed for multi-vehicle tracking in the intersection scene. Considering the complexity and application feasibility of the algorithm, in the object detection step the detection results provided by virtual loops were post-processed and then used as the input for the tracker. For the tracker, lightweight methods were designed to extract and select features and incorporate them into the adaptive color tracking (ACT) framework, and suitable online feature selection algorithms were integrated into the mature ACT system with good compatibility. The proposed feature selection methods and multi-vehicle tracking method are evaluated on the KITTI dataset and show efficient vehicle tracking performance compared to the other state-of-the-art approaches in the same category. The system also performs excellently on video sequences recorded at the intersection. Furthermore, the presented vehicle tracking system is suitable for surveillance applications.
Keywords: real-time, multi-vehicle tracking, feature selection, color attribution
Procedia PDF Downloads 163
933 Ground Motion Modelling in Bangladesh Using Stochastic Method
Authors: Mizan Ahmed, Srikanth Venkatesan
Abstract:
The geological and tectonic framework indicates that Bangladesh is one of the most seismically active regions in the world. The Bengal Basin is at the junction of three major interacting plates: the Indian, Eurasian, and Burma Plates. Besides, there are many active faults within the region, e.g. the large Dauki fault in the north. The country has experienced a number of destructive earthquakes due to the movement of these active faults. The current seismic provisions of Bangladesh are mostly based on earthquake data prior to 1990. Given the record of earthquakes post-1990, there is a need to revisit the design provisions of the code. This paper compares the base shear demand of three major cities in Bangladesh: Dhaka (the capital city), Sylhet, and Chittagong, for earthquake scenarios of magnitudes Mw 7.0, 7.5, 8.0, and 8.5, using a stochastic model. In particular, the stochastic model allows the flexibility to input region-specific parameters, such as the shear wave velocity profile (developed from the Global Crustal Model CRUST2.0), and to include the effects of attenuation as individual components. Effects of soil amplification were analysed using the Extended Component Attenuation Model (ECAM). Results show that the estimated base shear demand is higher than the code provisions, leading to the suggestion of additional seismic design consideration in the study regions.
Keywords: attenuation, earthquake, ground motion, stochastic, seismic hazard
Procedia PDF Downloads 249
932 Efficiency of Membrane Distillation to Produce Fresh Water
Authors: Sabri Mrayed, David Maccioni, Greg Leslie
Abstract:
Seawater desalination has been accepted as one of the most effective solutions to the growing problem of a diminishing clean drinking water supply. Currently, two desalination technologies dominate the market: thermally driven multi-stage flash distillation (MSF) and membrane-based reverse osmosis (RO). However, in recent years membrane distillation (MD) has emerged as a potential alternative to the established means of desalination. This research project intended to determine the viability of MD as an alternative to MSF and RO for seawater desalination. Specifically, the project involved a thermodynamic analysis of the process, based on the second law of thermodynamics, to determine the efficiency of MD. Data were obtained from experiments carried out on a laboratory rig. In order to determine the exergy values required for the exergy analysis, two separate models were built in Engineering Equation Solver: the 'Minimum Separation Work Model' and the 'Stream Exergy Model'. The efficiency of the MD process was found to be 17.3%, and the energy consumption was determined to be 4.5 kWh to produce one cubic meter of fresh water. The results indicate that MD has potential as a technique for seawater desalination compared to RO and MSF. However, this was shown to be the case only if an alternative energy source, such as green or waste energy, was available to provide the thermal energy input to the process. If the process was required to power itself, it was shown to be highly inefficient and in no way thermodynamically viable as a commercial desalination process.
Keywords: desalination, exergy, membrane distillation, second law efficiency
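As a plausibility check on the reported figures: second-law efficiency is the minimum (reversible) separation work divided by the actual energy input. The ~0.78 kWh/m³ minimum work for seawater used below is a commonly cited literature value, taken here as an assumption rather than the paper's own number:

```python
# Second-law (exergetic) efficiency = reversible separation work / actual work.
w_min_kwh_per_m3 = 0.78      # assumed minimum separation work for seawater
w_actual_kwh_per_m3 = 4.5    # energy consumption reported in the abstract

eta_II = w_min_kwh_per_m3 / w_actual_kwh_per_m3
print(f"Second-law efficiency: {eta_II:.1%}")  # ~17%, consistent with the reported 17.3%
```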
Procedia PDF Downloads 364
931 Computational Approach for GRP78-NF-κB Binding Interactions in the Context of Neuroprotective Pathway in Brain Injuries
Authors: Janneth Gonzalez, Marco Avila, George Barreto
Abstract:
GRP78 participates in multiple functions in the cell under normal and pathological conditions, controlling calcium homeostasis, protein folding, and the unfolded protein response. GRP78 is located in the endoplasmic reticulum, but it can change its location under stress, hypoxic, and apoptotic conditions. NF-κB represents the keystone of the inflammatory process and regulates the transcription of several genes related to apoptosis, differentiation, and cell growth. A possible relationship between GRP78 and NF-κB could support and explain several mechanisms that may regulate a variety of cell functions, especially following brain injuries. Although several reports show interactions between NF-κB and heat shock protein family members, there is a lack of information on how GRP78 may interact with NF-κB and possibly regulate its downstream activation. Therefore, we assessed computational predictions of the protein-protein interactions between GRP78 (chain A) and the NF-κB complex (IκB alpha and p65). The interaction interface of the docking model showed that the amino acids ASN 47, GLU 215, and GLY 403 of GRP78 and THR 54, ASN 182, and HIS 184 of NF-κB are key residues involved in the docking. The electrostatic field between the GRP78-NF-κB interfaces and molecular dynamics simulations support the possible interaction between the proteins. In conclusion, this work sheds some light on the possible GRP78-NF-κB complex, indicating key residues in this crosstalk, which may be used as input for a better drug design strategy targeting NF-κB downstream signaling as a new therapeutic approach following brain injuries.
Keywords: computational biology, protein interactions, GRP78, bioinformatics, molecular dynamics
Procedia PDF Downloads 342
930 A Research on the Effect of Soil-Structure Interaction on the Dynamic Response of Symmetrical Reinforced Concrete Buildings
Authors: Adinew Gebremeskel Tizazu
Abstract:
The effect of soil-structure interaction on the dynamic response of reinforced concrete buildings of regular and symmetrical geometry is considered in this study. The structures are presumed to be embedded in a homogeneous soil formation underlain by very stiff material or bedrock. The structure-foundation-soil system is excited at the base by earthquake ground motion. The superstructure is idealized as a system with lumped masses concentrated at the floor levels and coupled with the substructure. The substructure system, which comprises the foundation and soil, is represented by springs and dashpots. Frequency-dependent impedances of the foundation system are incorporated in the discrete model in terms of the spring and dashpot coefficients. The excitation applied to the model consists of field ground motions from actual earthquake records. The modal superposition principle is employed to transform the equations of motion from geometrical coordinates to modal coordinates. However, the modal equations remain coupled with respect to the damping terms, due to the difference in the damping mechanisms of the superstructure and the soil; hence, proportional damping for the coupled structural system may not be assumed. An iterative approach is adopted and programmed to solve the system of coupled equations of motion in modal coordinates and obtain the displacement responses of the system. Parametric studies of the responses of building structures with regular and symmetric plans of different structural properties and heights are made for fixed and flexible base conditions, for different soil conditions encountered in Addis Ababa. The displacements, base shears, and base overturning moments are used to compare the different types of structures for various foundation embedment depths, site conditions, and heights of structure, against the values for the fixed-base structure. The study shows that flexible-base structures generally exhibit responses different from those of structures with a fixed base. Basically, the natural circular frequencies, the base shears, and the inter-story displacements for the flexible base are less than those of the fixed-base structures. This trend is particularly evident when the flexible soil has a large thickness. In contrast, the trend becomes less predictable when the thickness of the flexible soil decreases; moreover, in the latter case, the iteration undulates significantly, making prediction difficult. This is attributed to the highly jagged nature of the impedance functions over frequency for such formations. In this case, it is difficult to conclude whether the conventional fixed-base approach yields conservative design forces, as is the case for soil formations of large thickness.
Keywords: soil-structure interaction, dynamic response, modal superposition principle, parametric studies
Procedia PDF Downloads 32
929 X-Ray Dynamical Diffraction Rocking Curves in Case of Third Order Nonlinear Renninger Effect
Authors: Minas Balyan
Abstract:
In the third-order nonlinear Takagi equations for monochromatic waves and in the third-order nonlinear time-dependent dynamical diffraction equations for X-ray pulses, the Fourier coefficients of the linear and the third-order nonlinear susceptibilities are zero for forbidden reflections. Dynamical diffraction in the nonlinear case is related to the presence, in the nonlinear equations, of terms proportional to the zero-order and the nonzero second-order Fourier coefficients of the third-order nonlinear susceptibility. Thus, in the third-order nonlinear Bragg diffraction case, a nonlinear analogue of the well-known Renninger effect takes place. In this work, this 'third-order nonlinear Renninger effect' is considered theoretically and numerically. If the reflection is exactly forbidden, the diffracted wave's amplitude is zero in both the Laue and Bragg cases, since the boundary conditions and the dynamical diffraction equations are compatible with the zero solution. But in real crystals, due to some percentage of dislocations and other localized defects, the atoms are displaced with respect to their equilibrium positions. Thus, in real crystals the susceptibilities of a forbidden reflection are some orders of magnitude smaller than for usual, non-forbidden reflections, but are not exactly equal to zero. Numerical calculations for susceptibilities two orders of magnitude smaller than those of non-forbidden reflections show that in the Bragg geometry case the nonlinear reflection curve's behavior is the same as for a non-forbidden reflection, but for a forbidden reflection the rocking curves' width, center, and boundaries are highly sensitive to the input intensity value. This gives an opportunity to investigate third-order nonlinear X-ray dynamical diffraction for beams of low intensity: 0.001 in units of the critical intensity.
Keywords: third order nonlinearity, Bragg diffraction, nonlinear Renninger effect, rocking curves
Procedia PDF Downloads 406
928 Principal Component Analysis on Colon Cancer Detection
Authors: N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Rita Magdalena, R. D. Atmaja, Sofia Saidah, Ocky Tiaramukti
Abstract:
Colon cancer, or colorectal cancer, is a type of cancer that attacks the last part of the human digestive system. Lymphoma and carcinoma are types of cancer that attack the human colon. Colon cancer causes the deaths of about half a million people every year. In Indonesia, colon cancer is the third most common cancer in women and the second in men. Unhealthy lifestyles, such as minimal consumption of fiber, rarely exercising, and a lack of awareness of early detection, are factors behind the high incidence of colon cancer. The aim of this project is to produce a system that can detect and classify images into the colon cancer types lymphoma and carcinoma, or normal. The designed system used 198 colon cancer tissue pathology images, consisting of 66 images of lymphoma cancer, 66 images of carcinoma cancer, and 66 of the normal, healthy colon condition. The system classifies colon cancer starting from image preprocessing, followed by feature extraction using Principal Component Analysis (PCA) and classification using the K-Nearest Neighbor (K-NN) method. The preprocessing stages are resizing, conversion of the RGB image to grayscale, edge detection, and, last, histogram equalization. Tests were done by trying several K-NN input parameter settings. The result of this project is an image processing system that can detect and classify the type of colon cancer with high accuracy and low computation time.
Keywords: carcinoma, colorectal cancer, k-nearest neighbor, lymphoma, principal component analysis
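A sketch of the PCA-plus-K-NN chain on stand-in feature vectors; the preprocessing steps the abstract names are assumed to have already produced the flattened grayscale images used here:

```python
# PCA compresses each flattened image to a few components; K-NN classifies
# in that reduced space. On random stand-in data the score is near chance;
# only the pipeline structure matters here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(198, 64 * 64))   # 198 flattened 64x64 tissue images
y = np.repeat([0, 1, 2], 66)          # lymphoma, carcinoma, normal (66 each)

clf = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=5))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```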
Procedia PDF Downloads 205
927 Fuzzy Inference-Assisted Saliency-Aware Convolution Neural Networks for Multi-View Summarization
Authors: Tanveer Hussain, Khan Muhammad, Amin Ullah, Mi Young Lee, Sung Wook Baik
Abstract:
The Big Data generated by distributed vision sensors installed on a large scale in smart cities creates hurdles for its efficient and beneficial exploration for browsing, retrieval, and indexing. This paper presents a three-fold framework for effective video summarization of such data, providing a compact and representative format of Big Video Data. In the first fold, the framework acquires input video data from the installed cameras and collects clues, such as the type and count of objects and the clarity of the view, from a chunk of a pre-defined number of frames of each view. In the second fold, the decision of representative view selection for a particular interval is made by a fuzzy inference system, producing a precise and human-like decision reinforced by the known clues. In the third fold, the framework forwards the selected view frames to the summary generation mechanism, which is supported by a saliency-aware convolutional neural network (CNN) model. This new combination of fuzzy rules for view selection followed by a CNN architecture for saliency computation makes the multi-view video summarization (MVS) framework a suitable candidate for real-world practice in smart cities.
Keywords: big video data analysis, fuzzy logic, multi-view video summarization, saliency detection
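A hand-rolled Mamdani-style sketch of fuzzy view scoring from the two clues the abstract names (object count and view clarity); the membership shapes, rules, and defuzzification weights are illustrative assumptions, not the paper's rule base:

```python
# Fuzzify the clues, fire two rules (min for AND), and defuzzify by a
# weighted average to score each camera view; the best view represents
# the interval.
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def view_score(object_count, clarity):
    """Rules: many objects AND clear view -> high; few OR blurry -> low."""
    many = tri(object_count, 2, 10, 20)
    few = tri(object_count, -1, 0, 5)
    clear = tri(clarity, 0.4, 1.0, 1.6)
    blurry = tri(clarity, -0.2, 0.0, 0.6)
    high, low = min(many, clear), max(few, blurry)
    return (0.9 * high + 0.1 * low) / (high + low + 1e-9)

views = {"cam_a": (12, 0.9), "cam_b": (3, 0.5), "cam_c": (8, 0.2)}
best = max(views, key=lambda v: view_score(*views[v]))
print("representative view:", best)  # cam_a: many objects, clear view
```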
Procedia PDF Downloads 188
926 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment
Authors: Isabela Moreira Queiroz
Abstract:
Slope stability analyses are largely carried out by deterministic methods and evaluated through a single safety factor. Although it is known that geotechnical parameters can present great dispersion, such analyses treat them as fixed and known. Probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of values of the safety factor and thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (Point Estimates), and Monte Carlo. This paper presents a comparison between the results of deterministic and probabilistic analyses (the FOSM method, Monte Carlo, and Rosenblueth) applied to a hypothetical slope, carried out to evaluate the behavior of the slope and the consequent risk analysis, which is used to calculate the risk and analyze mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noted that the calculation of the risk makes it possible to prioritize the implementation of mitigation measures. It is therefore recommended to make a good assessment of the geological-geotechnical model, incorporating the uncertainty in viability, design, construction, operation, and closure by means of risk management.
Keywords: probabilistic methods, risk assessment, risk management, slope stability
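A minimal Monte Carlo sketch of a probability-of-failure estimate: sample the random variables, evaluate the safety factor, and count FS < 1 outcomes. The infinite-slope safety factor and the cohesion/friction distributions below are hypothetical, chosen only to illustrate the method:

```python
# Monte Carlo probability of failure for a toy infinite-slope problem.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
cohesion = rng.normal(25.0, 5.0, n)          # kPa (hypothetical mean/std)
phi = np.radians(rng.normal(30.0, 3.0, n))   # friction angle

gamma, depth, beta = 18.0, 5.0, np.radians(35.0)  # fixed geometry (illustrative)
fs = ((cohesion + gamma * depth * np.cos(beta) ** 2 * np.tan(phi))
      / (gamma * depth * np.sin(beta) * np.cos(beta)))

pf = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.2f}, probability of failure = {pf:.4f}")
```

A deterministic analysis would report only the mean FS; the probabilistic run exposes the nonzero failure probability hiding behind an FS comfortably above 1.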
Procedia PDF Downloads 391
925 Optimization Techniques of Doubly-Fed Induction Generator Controller Design for Reliability Enhancement of Wind Energy Conversion Systems
Authors: Om Prakash Bharti, Aanchal Verma, R. K. Saket
Abstract:
The Doubly-Fed Induction Generator (DFIG) is suggested for Wind Energy Conversion Systems (WECS) to extract wind power. The DFIG is preferably employed due to its robustness to variable wind and rotor speeds. The DFIG is adaptable because the system parameters, including real power, reactive power, DC-link voltage, and the transient and dynamic responses, can be handled smoothly; these need to be analyzed constantly, and the analysis becomes more important during any unusual condition in the electrical power system. Hence, the study and improvement of the system parameters and the transient response performance of the DFIG need to be accomplished using suitable control techniques. To fulfill this task, the present work implements and compares optimization methods for the design of the DFIG controller for WECS. Bio-inspired optimization techniques are applied to obtain the optimal controller design parameters for DFIG-based WECS. The optimized DFIG controllers are then used to retrieve the transient response performance of the sixth-order DFIG model with a step input. The results using MATLAB/Simulink show the advantage of the Firefly Algorithm (FFA) over the other controller design methods compared.
Keywords: doubly-fed induction generator, wind turbine, wind energy conversion system, induction generator, transfer function, proportional, integral, derivatives
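A compact firefly-algorithm sketch that tunes PI gains by minimizing the ITAE of a toy second-order step response; the plant, gain bounds, and FFA constants below stand in for the paper's sixth-order DFIG model and cost function:

```python
# Firefly algorithm: each firefly is a (Kp, Ki) pair; dimmer fireflies move
# toward brighter (lower-cost) ones with distance-dependent attraction.
import numpy as np

def itae_cost(params):
    """Integral of time-weighted absolute error for a toy closed loop."""
    kp, ki = params
    dt, t_end = 0.005, 2.0
    y = dy = integ = cost = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        e = 1.0 - y                      # unit step reference
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        ddy = u - 2 * 0.3 * 2.0 * dy - (2.0 ** 2) * y  # wn=2, zeta=0.3 plant
        dy += ddy * dt
        y += dy * dt
        cost += t * abs(e) * dt
    return cost

rng = np.random.default_rng(3)
n_ff, n_iter, beta0, gamma, alpha = 12, 30, 1.0, 1.0, 0.1
pop = rng.uniform(0.1, 20.0, size=(n_ff, 2))
cost = np.array([itae_cost(p) for p in pop])
for _ in range(n_iter):
    for i in range(n_ff):
        for j in range(n_ff):
            if cost[j] < cost[i]:        # move i toward brighter j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pop[i] += beta * (pop[j] - pop[i]) + alpha * rng.normal(size=2)
                pop[i] = np.clip(pop[i], 0.1, 20.0)
                cost[i] = itae_cost(pop[i])
best = pop[np.argmin(cost)]
print("best (Kp, Ki):", best, "ITAE:", cost.min())
```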
Procedia PDF Downloads 93
924 BIM-Based Tool for Sustainability Assessment and Certification Documents Provision
Authors: Taki Eddine Seghier, Mohd Hamdan Ahmad, Yaik-Wah Lim, Samuel Opeyemi Williams
Abstract:
The assessment of building sustainability to achieve a specific green benchmark and the preparation of the documents required to receive a green building certification are both considered major challenging tasks for a green building design team. However, this labor- and time-consuming process can take advantage of available Building Information Modeling (BIM) features, such as material take-off and scheduling. Furthermore, the workflow can be automated in order to track potentially achievable credit points and provide rating feedback for several design options, by using integrated Visual Programming (VP) to handle the parameters stored within the BIM model. Hence, this study proposes a BIM-based tool that uses the Green Building Index (GBI) rating system requirements as a unique input case to evaluate building sustainability in the design stage of the building project life cycle. The tool covers two key models: firstly, a model for data extraction, calculation, and classification of achievable credit points in a green template; secondly, a model for the generation of the documents required for green building certification. The tool was validated on a BIM model of a residential building, and it serves as proof of concept that the building sustainability assessment for GBI certification can be automatically evaluated and documented through BIM.
Keywords: green building rating system, GBRS, building information modeling, BIM, visual programming, VP, sustainability assessment
Procedia PDF Downloads 326
923 Importance of Macromineral Ratios and Products in Association with Vitamin D in Pediatric Obesity Including Metabolic Syndrome
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
The metabolisms of the macrominerals calcium, phosphorus, and magnesium are closely associated with the metabolism of vitamin D. Magnesium in particular, the second most abundant intracellular cation, is related to biochemical and metabolic processes in the body, such as those of carbohydrates, proteins, and lipids. The status of each mineral has been investigated in obesity to some extent; their products and ratios may possibly give much more detailed information about the matter. The aim of this study is to investigate possible relations between each macromineral and some obesity-related parameters. This study was performed on 235 children aged 6-18 years. Aside from anthropometric measurements, hematological analyses were performed, and a TANITA body composition monitor using bioelectrical impedance analysis technology was used to establish some obesity-related parameters, including basal metabolic rate (BMR), total fat mass, mineral mass, and muscle mass. World Health Organization body mass index (BMI) percentiles for age and sex were used to constitute the groups. Values above the 99th percentile were defined as morbid obesity. Those between the 95th and 99th percentiles were included in the obese group. The overweight group comprised children whose percentiles were between 85 and 95, and children between the 15th and 85th percentiles were defined as normal. Metabolic syndrome (MetS) components (waist circumference, fasting blood glucose, triacylglycerol, high-density lipoprotein cholesterol, systolic pressure, diastolic pressure) were determined. High-performance liquid chromatography was used to determine vitamin D status by measuring 25-hydroxy cholecalciferol (25-hydroxyvitamin D3, 25(OH)D); values above 30.0 ng/ml were accepted as sufficient. The SPSS statistical package program was used for the evaluation of the data, with statistical significance accepted as p < 0.05. Important points were the correlations found between vitamin D and magnesium as well as phosphorus (p < 0.05) in the group with normal BMI values; these correlations were lost in the other groups. The ratio of phosphorus to magnesium was even more highly correlated with vitamin D (p < 0.001). The negative correlation between magnesium and total fat mass (p < 0.01) was confined to the MetS group, showing the inverse relationship between magnesium levels and the degree of obesity. In this group, the calcium*magnesium product exhibited the highest correlation with total fat mass (p < 0.001) among all groups, and only in the MetS group was a negative correlation found between BMR and the calcium*magnesium product (p < 0.05). In conclusion, magnesium is at the center of attention concerning its relationships with vitamin D, fat mass, and MetS. The ratios and products derived from macrominerals, including magnesium, have pointed out stronger associations than each element alone. Final considerations have shown that the unique correlations of magnesium, as well as of the calcium*magnesium product, with total fat mass draw attention particularly in the MetS group, possibly due to derangements in some basic elements of carbohydrate as well as lipid metabolism.
Keywords: macrominerals, metabolic syndrome, pediatric obesity, vitamin D
Procedia PDF Downloads 114
922 Modeling and Simulation of Pad Surface Topography by Diamond Dressing in Chemical-Mechanical Polishing Process
Authors: A. Chen Chao-Chang, Phong Pham-Quoc
Abstract:
The chemical-mechanical polishing (CMP) process has been widely applied in fabricating integrated circuits (ICs), using a soft polishing pad combined with a slurry composed of micron- or nano-scale abrasives to generate the chemical reaction that removes substrate or film materials from the wafer. During the CMP process, pad uniformity usually serves as a datum surface for wafer planarization, and pad asperities can dominate the microscopic pad-slurry-wafer interaction. However, the pad topography can be changed by the mechanism factors of CMP, and the pad needs to be re-conditioned or dressed by a diamond dresser with well-distributed diamond grits on a disc surface. It is still very complicated to analyze and understand the kinematics of the diamond dressing process under the effects of input variables, including the oscillation of the diamond dresser and the rotation speed ratio between the pad and the diamond dresser. This paper develops a generic geometric model to clarify the kinematic modeling of diamond dressing processes, covering the dresser/pad motion, the pad cutting locus, the relative velocity of the diamond abrasive grits on the pad surface, and the overlap of cutting, for the prediction of the pad surface topography. Simulation results focus on comparing and analyzing the kinematics of diamond dressing on certain CMP tools. The significant parameters for the diamond dressing process are identified and discussed. Future studies can apply these results to diamond dresser design and to experimental verification of the pad dressing process.
Keywords: kinematic modeling, diamond dresser, pad cutting locus, CMP
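A parametric sketch of a single grit's cutting locus on the pad: the grit rides a rotating, oscillating dresser while the pad itself rotates, so the locus in pad coordinates combines the three motions. All speeds and geometry are illustrative values, not the paper's:

```python
# Grit position in the machine frame, then rotated back into the pad frame
# to obtain the cutting locus traced on the pad surface.
import numpy as np

t = np.linspace(0.0, 2.0, 4000)                    # seconds
w_pad = 2 * np.pi * 60 / 60                        # pad: 60 rpm
w_dr = 2 * np.pi * 90 / 60                         # dresser: 90 rpm (ratio 1.5)
r_grit = 0.05                                      # grit radius on dresser disc (m)
swing = 0.12 + 0.03 * np.sin(2 * np.pi * 0.5 * t)  # oscillating dresser center (m)

x = swing + r_grit * np.cos(w_dr * t)              # machine-frame coordinates
y = r_grit * np.sin(w_dr * t)
locus_x = np.cos(-w_pad * t) * x - np.sin(-w_pad * t) * y
locus_y = np.sin(-w_pad * t) * x + np.cos(-w_pad * t) * y

r = np.hypot(locus_x, locus_y)
print(f"radial extent of cutting locus: {r.min():.3f}-{r.max():.3f} m")
```

Sweeping the speed ratio and oscillation frequency in such a model shows how uniformly the grits cover the pad, which is the quantity the dressing overlap analysis is after.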
Procedia PDF Downloads 255
921 Control System Design for a Simulated Microbial Electrolysis Cell
Authors: Pujari Muruga, T. K. Radhakrishnan, N. Samsudeen
Abstract:
Hydrogen is considered the most important energy carrier and fuel of the future because of its high energy density and zero-emission properties. The Microbial Electrolysis Cell (MEC) is a new and promising approach for hydrogen production from organic matter, including wastewater and other renewable resources. By utilizing anode microorganism activity, an MEC can produce hydrogen gas at smaller voltages (as low as 0.2 V) than those required for electrolytic hydrogen production (≥ 1.23 V). The hydrogen production processes of the MEC reactor are very nonlinear and highly complex because of the presence of microbial interactions and highly complex phenomena in the system. Increasing the hydrogen production rate and lowering the energy input are two important challenges of MEC technology. The mathematical model of the MEC is based on a material balance with the integration of the bioelectrochemical reactions. The main objective of the research is to produce biohydrogen by selecting the optimum current and controlling the voltage applied to the MEC. Precise control is required for the MEC reactor, so that the amount of current required to produce hydrogen gas can be controlled according to the composition of the substrate in the reactor. Various simulation tests involving multiple set-point changes, disturbance rejection, and noise rejection were performed to evaluate the performance using a PID controller tuned with Ziegler-Nichols settings. Simulation results show that a better-designed controller could provide a better control effect on the MEC system, so that higher hydrogen production can be obtained.
Keywords: microbial electrolysis cell, hydrogen production, applied voltage, PID controller
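A sketch of the Ziegler-Nichols (ultimate-gain) PID settings named in the abstract; the ultimate gain Ku and period Tu below are assumed values, not ones identified from the real cell:

```python
# Classic Ziegler-Nichols PID tuning rules from the ultimate gain and period.
Ku, Tu = 8.0, 30.0                    # assumed ultimate gain and period (s)

Kp = 0.6 * Ku
Ti = Tu / 2.0
Td = Tu / 8.0
Ki = Kp / Ti
Kd = Kp * Td
print(f"Kp={Kp:.2f}, Ki={Ki:.3f} 1/s, Kd={Kd:.1f} s")

def pid_step(e, state, dt=1.0):
    """One PID update; state carries the integral and the previous error."""
    integ, e_prev = state
    integ += e * dt
    u = Kp * e + Ki * integ + Kd * (e - e_prev) / dt
    return u, (integ, e)
```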
Procedia PDF Downloads 247
920 Automation of AAA Game Development Using AI and Procedural Generation
Authors: Paul Toprac, Branden Heng, Harsheni Siddharthan, Allison Tseng, Sarah Abraham, Etienne Vouga
Abstract:
The goal of this project was to evaluate and document the capabilities and limitations of AI tools for empowering small teams to create high-budget, high-profile (AAA) 3D games typically developed by large studios. Two teams of novice game developers attempted to create two different games using AI and Unreal Engine 5.3. First, the teams evaluated 60 AI art, design, sound, and programming tools by considering their capability, ease of use, cost, and license restrictions. Then, the teams used a shortlist of 13 AI tools for game development. During this process, the following tools were found to be the most productive: (1) ChatGPT 4.0 for both game and narrative concepting and documentation; (2) Dall-E 3 and OpenArt for concept art; (3) Beatoven for music drafting; (4) Epic PCG for level design; and (5) ChatGPT 4.0 and Github Copilot for generating simple code and complementing human-made tutorials as an additional learning resource. While current generative AI may appear impressive at first glance, the assets it produces fall short of AAA industry standards. Generative AI tools are helpful when brainstorming ideas such as concept art and basic storylines, but they still cannot replace human input or creativity at this time. Regarding programming, AI can only effectively generate simple code and act as an additional learning resource. Thus, generative AI tools are at best tools to enhance developer productivity rather than a system to replace developers.
Keywords: AAA games, AI, automation tools, game development
Procedia PDF Downloads 23
919 Theoretical Performance of a Sustainable Clean Energy On-Site Generation Device to Convert Consumers into Producers and Its Possible Impact on Electrical National Grids
Authors: Eudes Vera
Abstract:
In this paper, a theoretical evaluation is carried out of the performance of a forthcoming fuel-less clean energy generation device, the Air Motor. The underlying physical principles that support this technology are succinctly described. Examples of the machine and theoretical values of input and output powers are also given. In addition, its main features, such as portability, on-site energy generation and delivery, miniaturization of generation plants, efficiency, and scaling down of the whole electric infrastructure, are discussed. The main component of the Air Motor, the Thermal Air Turbine, generates useful power by converting into mechanical energy part of the thermal energy contained in a fan-produced airflow, while leaving its kinetic energy intact. Owing to this fact, an Air Motor can contain a long succession of identical air turbines, and the total power generated from a single airflow can be very large, as can its mechanical efficiency. Using the corresponding formulae, it is found that the mechanical efficiency of this device can be much greater than 100%, while its thermal efficiency is always less than 100%. On account of its multiple advantages, the Air Motor seems to be the perfect device to convert energy consumers into energy producers worldwide. If so, it would appear that current national electrical grids would no longer be necessary, because it does not seem practical or economical to bring energy from far away when it can be generated and consumed locally at the consumer's premises, using just the thermal energy contained in the ambient air.
Keywords: electrical grid, clean energy, renewable energy, in situ generation and delivery, generation efficiency
Procedia PDF Downloads 175
918 New Variational Approach for Contrast Enhancement of Color Image
Authors: Wanhyun Cho, Seongchae Seo, Soonja Kang
Abstract:
In this work, we propose a variational technique for image contrast enhancement that utilizes global and local information around each pixel. The energy functional is defined as a weighted linear combination of three terms: a local contrast term, a global contrast term, and a dispersion term. The first is a local contrast term that improves the contrast of an input image by increasing the grey-level differences between each pixel and its neighbors, utilizing contextual information around each pixel. The second is a global contrast term, which enhances the contrast of the image by minimizing the difference between its empirical distribution function and a cumulative distribution function, making the probability distribution of pixel values symmetric about the median. The third is a dispersion term that controls the departure of the new pixel values from those of the original image, preserving the original image characteristics as far as possible. We then derive the Euler-Lagrange equation for the true image that achieves the minimum of the proposed functional, using the fundamental lemma of the calculus of variations, and consider a procedure by which this equation can be solved using a gradient descent method, one of the dynamic approximation techniques. Finally, through various experiments, we demonstrate that the proposed method can enhance the contrast of color images better than existing techniques.
Keywords: color image, contrast enhancement technique, variational approach, Euler-Lagrange equation, dynamic approximation method, EME measure
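A heavily simplified gradient-descent sketch in the spirit of the functional, keeping only the local-contrast and dispersion terms (the global histogram term is omitted, and the weights are assumptions); with the dispersion weight larger than the local weight, the iteration converges rather than saturating:

```python
# Descend a two-term energy per channel: push each pixel away from its local
# mean (raising local contrast) while a dispersion term pulls it back toward
# the original image.
import numpy as np
from scipy.ndimage import uniform_filter

def enhance(channel, w_local=1.0, w_disp=2.0, step=0.2, iters=100):
    u = channel.astype(float).copy()
    for _ in range(iters):
        local_mean = uniform_filter(u, size=5)
        grad = -w_local * (u - local_mean) + w_disp * (u - channel)
        u -= step * grad                    # gradient descent on the energy
    return np.clip(u, 0.0, 1.0)

img = np.random.default_rng(0).uniform(0.4, 0.6, size=(64, 64))  # low-contrast toy image
out = enhance(img)
print("contrast (std) before/after:", img.std().round(3), out.std().round(3))
```

At the fixed point the local-contrast push and the dispersion pull balance, so local deviations roughly double here rather than growing without bound.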
Procedia PDF Downloads 449
917 Tide Contribution in the Flood Event of Jeddah City: Mathematical Modelling and Different Field Measurements of the Groundwater Rise
Authors: Aïssa Rezzoug
Abstract:
This paper aims to bring new elements demonstrating that the tide causes the groundwater to rise in the shoreline band on which the urban areas occur, especially in the western coastal cities of the Kingdom of Saudi Arabia, like Jeddah. The reason for the last inundation events in Jeddah was the groundwater rise in the city coupled, at the same time, with a strong precipitation event. This paper illustrates how the tide participates in raising the groundwater level significantly. It shows that the reason for internal groundwater recharge within the urban area is not only the excess water supply coming from surrounding areas due to human activity, together with the lack of a sufficient and efficient sewage system, but also the tide effect. The research study follows a quantitative method to assess groundwater level rise risks through many in-situ measurements and mathematical modelling. The proposed approach highlights that the groundwater level in the urban areas of the city, on the shoreline band, reaches the high tide level without considering any input from precipitation. Despite the small tide in the Red Sea compared to other oceanic coasts, the groundwater level is considerably enhanced by the tide from the seaside and by the freshwater table from the landside of the city. Under these conditions, the groundwater level becomes high in the city and prevents the soil from evacuating quickly enough the surface flow caused by a storm event, as was observed in the last historical flood catastrophe of Jeddah in 2009.
Keywords: flood, groundwater rise, Jeddah, tide
Procedia PDF Downloads 114