Search results for: error level
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14173

13603 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters

Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran

Abstract:

The current state-of-the-art methods of mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate, using computational fluid dynamics (CFD) simulations, that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods that rely on time-varying skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating, and that long after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology, and telemetry data. This method uses as inputs only the temperature and pressure readings of sensors externally attached to the tank. These sensors can operate during the nominal thermal duty cycle. The improved PVT method shows little sensitivity to the pressure sensor drifts that become critical towards the end of life of a mission, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in the gas and liquid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load. This gauging method improves by a factor of 8 the accuracy of the standard PVT retrievals that use look-up tables with tabulated data from the National Institute of Standards and Technology.
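The core of any PVT retrieval is mapping an external pressure (and temperature) reading to a propellant density via tabulated equation-of-state data, then multiplying by the tank volume. A minimal single-variable sketch, with a hypothetical density table standing in for the NIST data the paper interpolates in both pressure and temperature:

```python
from bisect import bisect_left

# Hypothetical density table at a fixed tank temperature (illustrative values
# only; the paper interpolates NIST-tabulated data in both P and T).
P_TABLE = [40.0, 60.0, 80.0, 100.0, 120.0]         # pressure, bar
RHO_TABLE = [260.0, 460.0, 760.0, 1100.0, 1400.0]  # density, kg/m^3

def pvt_mass(p_bar, tank_volume_m3):
    """Estimate propellant mass from an external pressure reading by linear
    interpolation of the tabulated density, then m = rho * V."""
    i = min(max(bisect_left(P_TABLE, p_bar), 1), len(P_TABLE) - 1)
    p0, p1 = P_TABLE[i - 1], P_TABLE[i]
    r0, r1 = RHO_TABLE[i - 1], RHO_TABLE[i]
    rho = r0 + (r1 - r0) * (p_bar - p0) / (p1 - p0)
    return rho * tank_volume_m3

# 50 L tank at beginning of life (100 bar) vs. later in the mission (70 bar)
print(pvt_mass(100.0, 0.05), pvt_mass(70.0, 0.05))
```

The improved PVT method additionally compensates for the temperature and density anisotropies discussed in the abstract; this sketch shows only the basic table-lookup step.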

Keywords: electric propulsion, mass gauging, propellant, PVT, xenon

Procedia PDF Downloads 339
13602 Multi-Level Framework for Effective Use of Stock Ordering System: Case Study of Small Enterprises in Kgautswane

Authors: Lethamaga Tladi, Ray Kekwaletswe

Abstract:

This study sought to conceptualise a multi-level framework for the effective use of a stock ordering system by small enterprises in a rural-area context. An interpretive research methodology was used to enable the researcher to analyse, in depth, the subjective meanings that small enterprises' employees attach to using the stock ordering system. The empirical data were collected from 13 small enterprises' employees as participants through semi-structured interviews and observations. The Interpretive Phenomenological Analysis (IPA) approach was used to analyse the employees' own accounts of their lived experiences of stock ordering system use, in terms of their relatedness to, and cognitive engagement with, the system. A case study of Kgautswane, a rural area in Limpopo Province, South Africa, served as the social context where the phenomenon manifested. Technology-Organisation-Environment theory (TOE), the Technology-to-Performance Chain model (TPC), and Representation Theory (RT) underpinned this study. In this multi-level study, the findings revealed that, at the organisational level, effective use of the stock ordering system was associated with organisational performance gains such as efficiency, productivity, quality, competitiveness, and market share. Equally, at the individual level, effective use of the stock ordering system minimised end-users' effort and time to accomplish their tasks, yielding improved individual performance. A multi-level framework for effective use of a stock ordering system is presented.

Keywords: effective use, multi-dimensions of use, multi-level of use, multi-level research, small enterprises, stock ordering system

Procedia PDF Downloads 162
13601 Taguchi Approach for the Optimization of the Stitching Defects of Knitted Garments

Authors: Adel El-Hadidy

Abstract:

For any industry, production and quality management and the reduction of wastage have a major impact on overall factory economics. This work discusses quality improvement in the garment industry by applying Pareto analysis, a cause-and-effect diagram, and Taguchi experimental design. The main purpose of the work is to reduce stitching defects, which will also minimise the rejection and rework rates. Application of the Pareto chart, the fishbone diagram, and process sigma level and/or performance level tools helps solve these problems on a priority basis. Among all defect types, sewing defects alone account for 69.3% to 97.3% of total defects. The process sigma level improved from 0.79 to 1.3, and the performance rating improved from level F to level D. The results showed that the new set of sewing parameters was superior to the original one. It can be seen that fabric size has the largest effect on the sewing defects and that needle size has the smallest effect on the stitching defects.
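Pareto analysis of the kind applied above ranks defect categories by count and accumulates their percentage share to isolate the "vital few". A sketch with made-up counts (the 69.3-97.3% sewing share reported in the abstract is plant data we do not have):

```python
# Hypothetical defect counts per category (illustrative only).
defects = {"sewing": 693, "fabric": 120, "cutting": 90, "finishing": 60, "other": 37}

def pareto(counts):
    """Return categories ranked by count, each with its cumulative percentage."""
    total = sum(counts.values())
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    cum, rows = 0.0, []
    for name, n in ranked:
        cum += 100.0 * n / total
        rows.append((name, n, round(cum, 1)))
    return rows

for row in pareto(defects):
    print(row)
```

In a typical application, the categories whose cumulative share first exceeds roughly 80% are attacked first.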

Keywords: garment, sewing defects, cost of rework, DMAIC, sigma level, cause and effect diagram, Pareto analysis

Procedia PDF Downloads 160
13600 Study of Ground-Level Electric Field under an 800 kV HVDC Unipolar Laboratory-Level Transmission Line

Authors: K. Urukundu, K. A. Aravind, Pradeep M. Nirgude, K. Sandhya

Abstract:

Transmission of bulk power over long distances through HVDC transmission lines is gaining importance, because transferring bulk power through HVDC from generating stations to load centers over long distances is more economical. However, these HVDC transmission lines create environmental and interference effects along the right-of-way due to the ionization of the surrounding atmosphere in the vicinity of the lines. The measurement of ground-level electric field and ionic current density is essential for evaluating the human effects of electromagnetic interference from an HVDC transmission line. In this paper, experimental laboratory results of the ground-level electric field under a miniature model of an 800 kV monopole HVDC line of 8 meters length are presented in lateral configuration for different heights of the conductor above the ground plane. The results are compared with simulated results obtained through Finite Element-based software.

Keywords: bundle, conductor, hexagonal, transmission line, ground-level electric field

Procedia PDF Downloads 207
13599 Evaluation of the Self-Organizing Map and the Adaptive Neuro-Fuzzy Inference System Machine Learning Techniques for the Estimation of Crop Water Stress Index of Wheat under Varying Application of Irrigation Water Levels for Efficient Irrigation Scheduling

Authors: Aschalew C. Workneh, K. S. Hari Prasad, C. S. P. Ojha

Abstract:

The crop water stress index (CWSI) is a cost-effective, non-destructive, and simple technique for tracking the onset of crop water stress. This study investigated the feasibility of CWSI derived from canopy temperature to detect the water status of wheat crops. Artificial intelligence (AI) techniques have become increasingly popular in recent years for determining CWSI. In this study, the performance of two AI techniques, the adaptive neuro-fuzzy inference system (ANFIS) and self-organizing maps (SOM), is compared in determining the CWSI of wheat crops. Field experiments were conducted for varying irrigation water applications during two seasons in 2022 and 2023 at the irrigation field laboratory of the Civil Engineering Department, Indian Institute of Technology Roorkee, India. The ANFIS- and SOM-simulated CWSI values were compared with the experimentally calculated CWSI (EP-CWSI). Multiple regression analysis was used to determine the upper and lower CWSI baselines. The upper CWSI baseline was found to be a function of crop height and wind speed, while the lower CWSI baseline was a function of crop height, air vapor pressure deficit, and wind speed. The performance of ANFIS and SOM was compared based on mean absolute error (MAE), mean bias error (MBE), root mean squared error (RMSE), index of agreement (d), Nash-Sutcliffe efficiency (NSE), and coefficient of determination (R²). Both models successfully estimated the CWSI of the wheat crop with high correlation coefficients and low statistical errors. However, ANFIS (R²=0.81, NSE=0.73, d=0.94, RMSE=0.04, MAE=0.00-1.76 and MBE=-2.13-1.32) outperformed the SOM model (R²=0.77, NSE=0.68, d=0.90, RMSE=0.05, MAE=0.00-2.13 and MBE=-2.29-1.45). Overall, the results suggest that ANFIS is a more reliable tool than SOM for accurately determining CWSI in wheat crops.
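The empirical CWSI is the canopy-air temperature difference normalized between the lower (non-stressed) and upper (fully stressed) baselines; in the study those baselines are themselves regressions on crop height, wind speed, and vapor pressure deficit. A minimal sketch with the baselines assumed already computed:

```python
def cwsi(canopy_temp_c, air_temp_c, dt_lower, dt_upper):
    """Crop water stress index: 0 at the lower (non-stressed) baseline,
    1 at the upper (fully stressed) baseline. dt_lower/dt_upper are the
    baseline canopy-air temperature differences for current conditions."""
    dt = canopy_temp_c - air_temp_c
    return (dt - dt_lower) / (dt_upper - dt_lower)

# e.g. canopy 2 degC above air, baselines at -3 degC and +5 degC (illustrative)
print(cwsi(32.0, 30.0, -3.0, 5.0))
```

Values near 0 indicate a well-watered crop; values approaching 1 trigger irrigation in scheduling applications.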

Keywords: adaptive neuro-fuzzy inference system, canopy temperature, crop water stress index, self-organizing map, wheat

Procedia PDF Downloads 44
13598 Power System Stability Enhancement Using Self Tuning Fuzzy PI Controller for TCSC

Authors: Salman Hameed

Abstract:

In this paper, a self-tuning fuzzy PI controller (STFPIC) is proposed for a thyristor controlled series capacitor (TCSC) to improve power system dynamic performance. In a STFPIC controller, the output scaling factor is adjusted on-line by an updating factor (α). The value of α is determined from a fuzzy rule-base defined on the error (e) and the change of error (Δe) of the controlled variable. The proposed self-tuning controller is designed using a very simple control rule-base and the most natural and unbiased membership functions (MFs) (symmetric triangles with equal base and 50% overlap with neighboring MFs). The comparative performance of the proposed STFPIC and the standard fuzzy PI controller (FPIC) has been investigated on a multi-machine power system (namely, a four-machine, two-area system) through detailed non-linear simulation studies using MATLAB/SIMULINK. The simulation studies show that, for damping oscillations, the performance of the proposed STFPIC is better than that of the standard FPIC. Moreover, both the STFPIC and the FPIC are quite effective in damping oscillations over a wide range of operating conditions and in significantly enhancing the power-carrying capability of the power system.

Keywords: genetic algorithm, power system stability, self-tuning fuzzy controller, thyristor controlled series capacitor

Procedia PDF Downloads 419
13597 Board Characteristics, Audit Committee Characteristics, and the Level of Bahraini Corporate Compliance with Mandatory IFRS Disclosure Requirements

Authors: Omar Juhmani

Abstract:

This paper examines the relation between internal corporate governance and the level of corporate compliance with mandatory IFRS disclosure requirements. Internal corporate governance is measured by board and audit committee characteristics. Using data from the Bahrain Stock Exchange, the results show that board independence is positively and significantly associated with the level of compliance with IFRS disclosure requirements. This suggests that internal corporate governance mechanisms are effective in improving financial reporting practices by increasing the level of compliance with IFRS disclosures. The regression analyses also indicate that two of the control variables, company size and audit firm size, are significantly positively associated with the level of corporate compliance with mandatory IFRS disclosure requirements in Bahrain.

Keywords: Bahrain, board and audit committee characteristics, compliance, disclosure, IFRS

Procedia PDF Downloads 415
13596 Development of Advanced Linear Calibration Technique for Air Flow Sensing by Using CTA-Based Hot Wire Anemometry

Authors: Ming-Jong Tsai, T. M. Wu, R. C. Chu

Abstract:

The purpose of this study is to develop an advanced linear calibration technique for air flow sensing using CTA-based hot-wire anemometry. The system contains a host PC with a human-machine interface, a wind tunnel, a wind speed controller, an automatic data acquisition module, and a nonlinear calibration model. To reduce the fitting error of a single fitting polynomial, this study proposes a Multiple Three-Order Polynomial Fitting Method (MPFM) for fitting the non-linear output of a CTA-based hot-wire anemometer. The CTA-based anemometer with built-in fitting parameters is installed in the wind tunnel, and the wind speed is controlled by the PC-based controller. The hot-wire anemometer's thermistor resistance change is converted into a voltage signal or temperature difference and sent to the PC through a DAQ card. After measurements of the original signal are completed, the multiple-polynomial fitting coefficients are automatically calculated and sent to the microprocessor in the hot-wire anemometer. Finally, the corrected hot-wire anemometer is verified for linearity, repeatability, and error percentage, and the system outputs quality-control reports.
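The MPFM idea is to split the calibration range into segments and fit a third-order inverse polynomial (voltage to speed) in each, rather than one global polynomial. A sketch on a synthetic King's-law-like response (illustrative data, not the authors' measurements):

```python
import numpy as np

# Synthetic CTA response E = sqrt(A + B*sqrt(U)) (King's-law form; illustrative)
u = np.linspace(0.5, 20.0, 60)        # wind speed, m/s
e = np.sqrt(1.2 + 0.8 * np.sqrt(u))   # anemometer output voltage, V

def fit_segments(x, y, n_seg=3, order=3):
    """MPFM-style fit: split the speed range into segments and fit a
    third-order polynomial mapping voltage back to speed in each segment."""
    bounds = np.linspace(x[0], x[-1], n_seg + 1)
    segs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        m = (x >= lo) & (x <= hi)
        segs.append((m, np.polyfit(y[m], x[m], order)))
    return segs

segs = fit_segments(u, e)
max_err = max(np.max(np.abs(np.polyval(c, e[m]) - u[m])) for m, c in segs)
print(f"max inversion error: {max_err:.2e} m/s")
```

Because each cubic only has to follow a narrow portion of the nonlinear curve, the piecewise residual is far smaller than a single polynomial over the full range.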

Keywords: flow rate sensing, hot wire, constant temperature anemometry (CTA), linear calibration, multiple three-order polynomial fitting method (MPFM), temperature compensation

Procedia PDF Downloads 410
13595 Multivalued Behavior for a Two-Level System Using Homotopy Analysis Method

Authors: Angelo I. Aquino, Luis Ma. T. Bo-ot

Abstract:

We use the Homotopy Analysis Method (HAM) to solve the system of equations modeling the two-level system and extract results which point to turbulent behavior. We regard multi-valued solutions as indicative of turbulence or turbulent-like behavior. We examine different specific cases which result in multi-valued velocities. The solutions are in series form, and application of HAM ensures convergence in some region.

Keywords: multivalued solutions, homotopy analysis method, two-level system, equation

Procedia PDF Downloads 591
13594 Design of an Algorithm for Software Development in a CBSE Environment Using a Feed-Forward Neural Network

Authors: Amit Verma, Pardeep Kaur

Abstract:

In software development organizations, component-based software engineering (CBSE) is an emerging paradigm for software development and has gained wide acceptance, as it often results in increased quality of the software product within development time and budget. In component reuse, the main challenge is identifying the right component from large repositories at the right time. The major objective of this work is to provide an efficient algorithm for storage and effective retrieval of components using a neural network and parameters based on user choice through clustering. This research paper proposes an algorithm that provides an error-free and automatic process for retrieval of components during reuse. In this algorithm, keywords (or components) are extracted from the software document, after which the k-means clustering algorithm is applied. Weights are then assigned to those keywords based on their frequency, and an ANN predicts whether the correct weight has been assigned to each keyword (or component); if not, it back-propagates to the initial step and re-assigns the weights. Finally, all keywords are stored in the repository for effective retrieval. The proposed algorithm is effective in error detection and correction and supports user-driven choice of components for reuse and efficient retrieval.

Keywords: component based development, clustering, back propagation algorithm, keyword based retrieval

Procedia PDF Downloads 374
13593 An Automatic Speech Recognition of Conversational Telephone Speech in Malay Language

Authors: M. Draman, S. Z. Muhamad Yassin, M. S. Alias, Z. Lambak, M. I. Zulkifli, S. N. Padhi, K. N. Baharim, F. Maskuriy, A. I. A. Rahim

Abstract:

The performance of a Malay automatic speech recognition (ASR) system for the call centre environment is presented. The system utilizes the Kaldi toolkit as the platform for the entire library and algorithms used in performing the ASR task. The acoustic model implemented in this system uses a deep neural network (DNN) to model the acoustic signal, and a standard n-gram model for language modelling. With 80 hours of training data from call centre recordings, the ASR system achieves 72% accuracy, corresponding to a 28% word error rate (WER). The testing was done using 20 hours of audio data. Despite the implementation of a DNN, the system shows low accuracy owing to the variety of noises, accents, and dialects that typically occur in the Malaysian call centre environment. This significant variation among speakers is reflected by the large standard deviation of the average word error rate (WERav) (i.e., ~10%). The lowest WER (13.8%) was obtained from a recording sample of a native speaker using the standard Malay dialect (central Malaysia), whereas the highest WER (49%) came from a sample containing conversation in a non-standard Malay dialect.
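WER, the metric behind the 28% figure above, is the word-level Levenshtein (edit) distance between the reference and hypothesis transcripts divided by the reference length. A standard sketch:

```python
def wer(reference, hypothesis):
    """Word error rate: minimum substitutions + insertions + deletions needed
    to turn the hypothesis into the reference, divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[-1][-1] / len(r)

print(wer("the cat sat on the mat", "the cat sat in the hat"))
```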

Keywords: conversational speech recognition, deep neural network, Malay language, speech recognition

Procedia PDF Downloads 317
13592 Seamless MATLAB® to Register-Transfer Level Design Methodology Using High-Level Synthesis

Authors: Petri Solanti, Russell Klein

Abstract:

Many designers are asking for an automated path from an abstract mathematical MATLAB model to a high-quality Register-Transfer Level (RTL) hardware description. Manual transformations of MATLAB or intermediate code are needed, when the design abstraction is changed. Design conversion is problematic as it is multidimensional and it requires many different design steps to translate the mathematical representation of the desired functionality to an efficient hardware description with the same behavior and configurability. Yet, a manual model conversion is not an insurmountable task. Using currently available design tools and an appropriate design methodology, converting a MATLAB model to efficient hardware is a reasonable effort. This paper describes a simple and flexible design methodology that was developed together with several design teams.

Keywords: design methodology, high-level synthesis, MATLAB, verification

Procedia PDF Downloads 134
13591 A Comparative Study of Generalized Autoregressive Conditional Heteroskedasticity (GARCH) and Extreme Value Theory (EVT) Model in Modeling Value-at-Risk (VaR)

Authors: Longqing Li

Abstract:

The paper addresses the inefficiency of the classical model in measuring Value-at-Risk (VaR) using a normal distribution or a Student's t distribution. Specifically, the paper focuses on the one-day-ahead Value-at-Risk (VaR) of major stock markets' daily returns in the US, UK, China, and Hong Kong over the most recent ten years, at the 95% confidence level. To improve predictive power and search for the best-performing model, the paper proposes using two leading alternatives, Extreme Value Theory (EVT) and a family of GARCH models, and compares their relative performance. The main contribution can be summarized in two aspects. First, the paper extends the GARCH family by incorporating EGARCH and TGARCH to shed light on the differences between them in estimating one-day-ahead Value-at-Risk (VaR). Second, to account for the non-normality of financial market return distributions, the paper applies the Generalized Error Distribution (GED), instead of the normal distribution, to govern the innovation term. A dynamic back-testing procedure is employed to assess the performance of each model, the GARCH family and the conditional EVT. The conclusion is that Exponential GARCH yields the best estimate in out-of-sample one-day-ahead Value-at-Risk (VaR) forecasting. Moreover, the performance difference between the GARCH models and the conditional EVT is indistinguishable.
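A one-day-ahead VaR backtest has the same shape regardless of the volatility model. The sketch below uses an EWMA (RiskMetrics-style) variance filter with a normal quantile as a simple stand-in for the paper's GARCH/GED recursions, and counts exceedances on synthetic returns:

```python
import math
import random

random.seed(0)
returns = [random.gauss(0.0005, 0.01) for _ in range(1000)]  # synthetic daily returns

def var_exceedance_rate(rets, lam=0.94, z=-1.645, warmup=100):
    """One-day-ahead 95% VaR from an EWMA variance filter; returns the fraction
    of days on which the realized return breached the forecast (target ~5%)."""
    sigma2 = sum(r * r for r in rets[:warmup]) / warmup  # initial variance
    hits, tested = 0, 0
    for r in rets[warmup:]:
        var_forecast = z * math.sqrt(sigma2)  # a negative return threshold
        tested += 1
        if r < var_forecast:
            hits += 1
        sigma2 = lam * sigma2 + (1 - lam) * r * r  # EWMA variance update
    return hits / tested

rate = var_exceedance_rate(returns)
print(f"exceedance rate: {rate:.3f}")
```

An adequate 95% VaR model produces an exceedance rate close to 5%; formal backtests (e.g. Kupiec's test) assess whether the observed rate is statistically consistent with that target.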

Keywords: Value-at-Risk, Extreme Value Theory, conditional EVT, backtesting

Procedia PDF Downloads 316
13590 Proposal of Optimality Evaluation for Quantum Secure Communication Protocols by Taking the Average of the Main Protocol Parameters: Efficiency, Security and Practicality

Authors: Georgi Bebrov, Rozalina Dimova

Abstract:

In the field of quantum secure communication, there is no evaluation that characterizes quantum secure communication (QSC) protocols in a complete, general manner. The current paper addresses the lack of such an evaluation for QSC protocols by introducing an optimality evaluation, expressed as the average over the three main parameters of QSC protocols: efficiency, security, and practicality. For the efficiency evaluation, the common expression of this parameter is used, which incorporates all the classical and quantum resources (bits and qubits) utilized for transferring a certain amount of information (bits) in a secure manner. Using a criteria-based approach (whether or not certain criteria are met), an expression for the practicality evaluation is presented, which accounts for the complexity of the practical realization of a QSC protocol. Based on the error rates that the common quantum attacks (measure-and-resend, intercept-and-resend, probe, and entanglement-swapping attacks) induce, the security evaluation for a QSC protocol is proposed as the minimum function taken over the error rates of the mentioned quantum attacks. For the sake of clarity, an example is presented to show how the optimality is calculated.
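Under the proposal above, the optimality score is the mean of the three parameter evaluations, with security taken as the minimum over the error rates the modelled attacks induce. A sketch with hypothetical parameter values:

```python
def optimality(efficiency, practicality, attack_error_rates):
    """Optimality = average of efficiency, security, and practicality, where
    security is the minimum induced error rate over the modelled attacks
    (measure-resend, intercept-resend, probe, entanglement swapping)."""
    security = min(attack_error_rates)
    return (efficiency + security + practicality) / 3.0

# Hypothetical protocol: efficiency 0.5, practicality 1.0, and the four
# attack-induced error rates below (all values illustrative).
print(optimality(0.5, 1.0, [0.25, 0.25, 0.5, 0.25]))
```

Taking the minimum makes the security score conservative: it is set by whichever attack an eavesdropper could mount with the least detectable disturbance.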

Keywords: quantum cryptography, quantum secure communication, quantum secure direct communication security, quantum secure direct communication efficiency, quantum secure direct communication practicality

Procedia PDF Downloads 178
13589 Examining the Missing Feedback Link in Environmental Kuznets Curve Hypothesis

Authors: Apra Sinha

Abstract:

The inverted U-shaped Environmental Kuznets curve (EKC) describes the pollution-income relationship: initially, pollution and environmental degradation rise with income per capita; this trend then reverses, since at higher income levels economic growth initiates environmental upgrading. However, the effect that increased environmental degradation has on growth is the missing feedback link that the EKC hypothesis does not address. This paper examines this missing feedback link in the Indian context through the causal association between fossil fuel consumption, carbon dioxide emissions, and economic growth for India. Fossil fuel consumption is taken here as a proxy for the driver of economic growth. The causal association between the aforementioned variables has been analyzed using five interventions, namely: 1) urban development, for which urbanization is taken as a proxy; 2) industrial development, for which industrial value added is taken as a proxy; 3) trade liberalization, for which the sum of exports and imports as a share of GDP is taken as a proxy; and 4) financial development, for which a) domestic credit to the private sector and b) net foreign assets are taken as proxies. The choice of interventions has been made keeping in view the economic liberalization perspective of India. The main aim of the paper is to investigate the missing feedback link of the Environmental Kuznets Curve hypothesis before and after incorporating the intervening variables. The period of study is 1971 to 2011, as it covers the pre- and post-liberalization eras in India. All data are taken from World Bank country-level indicators. The Johansen and Juselius cointegration testing methodology and error-correction-based Granger causality have been applied to all the variables. The results clearly show that the missing feedback link is addressed in only two of the five interventions. This paper puts forward significant policy implications for environmental protection and sustainable development.

Keywords: environmental Kuznets curve hypothesis, fossil fuel consumption, industrialization, trade liberalization, urbanization

Procedia PDF Downloads 245
13588 A Comparative Evaluation of the SIR and SEIZ Epidemiological Models to Describe the Diffusion Characteristics of COVID-19 Polarizing Viewpoints Online

Authors: Maryam Maleki, Esther Mead, Mohammad Arani, Nitin Agarwal

Abstract:

This study is conducted to examine how opposing viewpoints related to COVID-19 were diffused on Twitter. To accomplish this, six datasets using two epidemiological models, SIR (Susceptible, Infected, Recovered) and SEIZ (Susceptible, Exposed, Infected, Skeptics), were analyzed. The six datasets were chosen because they represent opposing viewpoints on the COVID-19 pandemic. Three of the datasets contain anti-subject hashtags, while the other three contain pro-subject hashtags. The time frame for all datasets is three years, starting from January 2020 to December 2022. The findings revealed that while both models were effective in evaluating the propagation trends of these polarizing viewpoints, the SEIZ model was more accurate with a relatively lower error rate (6.7%) compared to the SIR model (17.3%). Additionally, the relative error for both models was lower for anti-subject hashtags compared to pro-subject hashtags. By leveraging epidemiological models, insights into the propagation trends of polarizing viewpoints on Twitter were gained. This study paves the way for the development of methods to prevent the spread of ideas that lack scientific evidence while promoting the dissemination of scientifically backed ideas.
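The SIR fit above treats hashtag adoption as infection; the compartment dynamics can be integrated with a simple forward-Euler step, and SEIZ adds Exposed and Skeptic compartments analogously. A minimal sketch (parameter values are illustrative, not fitted to the paper's datasets):

```python
def sir(beta, gamma, s0, i0, steps, dt=1.0):
    """Forward-Euler integration of the SIR compartments, with 'infection'
    read as adoption of a hashtag and 'recovery' as ceasing to post it."""
    s, i, r = s0, i0, 0.0
    traj = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i * dt   # S -> I flow
        new_rec = gamma * i * dt      # I -> R flow
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        traj.append((s, i, r))
    return traj

# Illustrative run: population 1000, basic reproduction number ~3
traj = sir(beta=0.0003, gamma=0.1, s0=999.0, i0=1.0, steps=200)
peak_infected = max(i for _, i, _ in traj)
print(f"peak active adopters: {peak_infected:.1f}")
```

In practice the model parameters are estimated by minimizing the error between the integrated infected curve and the observed hashtag activity, which is the relative-error comparison the abstract reports.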

Keywords: mathematical modeling, epidemiological model, seiz model, sir model, covid-19, twitter, social network analysis, social contagion

Procedia PDF Downloads 54
13587 Bitplanes Gray-Level Image Encryption Approach Using Arnold Transform

Authors: Ali Abdrhman M. Ukasha

Abstract:

Data security is needed in data transmission, storage, and communication. The single-step parallel contour extraction (SSPCE) method is used to create an edge map, serving as a key image, from a gray-level or binary image. The XOR operation is performed between the key image and each bit plane of the original image in order to change the image pixel values. The Arnold transform is used to change the locations of image pixels as an image scrambling process. Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D gray-level image, which can then be completely reconstructed without any distortion. It is also shown that the algorithm offers very strong security against attacks such as salt-and-pepper noise and JPEG compression, proving that a gray-level image can be protected at a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
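The two ingredients of the scheme, XOR-ing pixel values with the key (edge-map) image and scrambling pixel positions with the Arnold transform, are both exactly invertible, which is why reconstruction is distortion-free. A sketch on a small square image (a toy key stands in for the SSPCE edge map):

```python
def arnold(img):
    """One iteration of the Arnold cat map (x, y) -> (x+y, x+2y) mod n
    on a square image given as a list of lists."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def arnold_inverse(img):
    """Inverse map (u, v) -> (2u-v, -u+v) mod n."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(2 * x - y) % n][(-x + y) % n] = img[x][y]
    return out

def xor_with_key(img, key):
    """XOR every pixel with the key image (the edge map in the paper's scheme)."""
    return [[p ^ k for p, k in zip(prow, krow)] for prow, krow in zip(img, key)]

img = [[(3 * x + y) % 256 for y in range(4)] for x in range(4)]
key = [[(x * y + 7) % 256 for y in range(4)] for x in range(4)]
cipher = arnold(xor_with_key(img, key))
restored = xor_with_key(arnold_inverse(cipher), key)
```

Repeating the scramble several times (the map is periodic on an n-by-n grid) strengthens the diffusion; decryption simply runs the same number of inverse iterations before the final XOR.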

Keywords: SSPCE method, image compression, salt-and-pepper attacks, bitplanes decomposition, Arnold transform, lossless image encryption

Procedia PDF Downloads 429
13586 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time-history dynamic analysis of structures is considered an exact method but is computationally intensive. Filtration of earthquake strong ground motions by applying a wavelet transform is an approach to reducing computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since an earthquake strong ground motion record is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, although each repetition introduces additional approximation error. In this paper, the strong ground motion has been filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sampled shear and moment frames is implemented. The error associated with each wavelet is computed by comparing the dynamic response of the sampled structures with the exact responses. Exact responses are computed by dynamic analysis of the structures using the non-filtered strong ground motion.
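Discrete-wavelet filtration of a record amounts to transforming, discarding the fine-scale (detail) coefficients, and inverse-transforming; with the Haar wavelet, one level of this reduces to pairwise averaging of samples. A minimal one-level sketch (real studies use deeper decompositions and other wavelet families):

```python
import math

def haar_filter(signal):
    """One-level Haar DWT low-pass: compute approximation coefficients,
    zero the detail coefficients, and reconstruct the record."""
    if len(signal) % 2:
        signal = signal + [signal[-1]]  # pad to even length
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(signal[::2], signal[1::2])]
    # inverse transform with all detail coefficients set to zero
    out = []
    for c in approx:
        out.extend([c / s, c / s])
    return out

print(haar_filter([1.0, 3.0, 2.0, 2.0, 5.0, 1.0]))
```

The filtered record has half as many distinct values, which is the source of the analysis-time saving; the difference between the responses to the filtered and original records is the error the abstract quantifies per wavelet.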

Keywords: wavelet transform, computational error, computational duration, strong ground motion data

Procedia PDF Downloads 371
13585 Surface Erosion and Slope Stability Assessment of Cut and Fill Slope

Authors: Kongrat Nokkaew

Abstract:

This article assesses the surface erosion and stability of cut-and-fill slopes in the excavation of a detention basin in Kalasin Province, Thailand. The large excavation project was built to enlarge the detention basin to relieve the repeated flooding and drought that usually occur in this area. However, by the end of the first rainstorm season, severe erosion and slope failures were widely observed. After investigation, the severity of erosion and slope failure was classified into five levels, from sheet erosion (Level 1), rill erosion (Levels 2-3), and gully erosion (Level 4) to slope failure (Level 5), in order to propose slope remediation. The preliminary investigation showed that lack of runoff control was the major factor in the surface erosion, while insufficient compaction of the fill slope led to the slope failures. The slope stability of four selected slope failures was back-calculated using the Simplified Bishop method with SEEP/W. The results show that the factor of safety of slopes located on non-plastic sand was less than one, indicating instability of the embankment slope. This analysis agrees well with the failures observed in the field.

Keywords: surface erosion, slope stability, detention basin, cut and fill

Procedia PDF Downloads 353
13584 "Good" Discretion Among Private Sector Street Level Bureaucrats

Authors: Anna K. Wood, Terri Friedline

Abstract:

In April and May 2020, the private banking industry approved over 1.7 million emergency small business loans, totaling over $650 billion in federal relief funds as part of the Paycheck Protection Program (PPP). Since the program’s rollout, the extensive evidence of discriminatory lending and misuse of funds has been revealed by investigative journalism and academic studies. This study is based on 41 interviews with frontline banking industry professionals conducted during the days and weeks of the PPP rollout, presenting a real-time narrative of the program rollout through the eyes of those in the role of a street-level bureaucrat. We present two themes from this data about the conditions under which these frontline workers experienced the PPP: Exigent Timelines and Defaulting to Existing Workplace Norms and Practices. We analyze these themes using literature on street-level organizations, bureaucratic discretion, and the differences between public and private sector logic. The results of this study present new directions for theorizing sector-level differences in street-level bureaucratic discretion in the context of mixed-sector collaboration on public service delivery, particularly under conditions of crisis and urgency.

Keywords: street level bureaucracy, social policy, bureaucratic discretion, public private partnerships

Procedia PDF Downloads 98
13583 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, passes over a traction reel, and is rewound onto a third reel. The objectives are (i) to train a model to predict the web tension and (ii) to calibrate the inputs, i.e., find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units wind an outer layer around the core, and the material makes a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) to calibrate the inputs to find the values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2). Modeling the error behavior with explicative rules helps improve the overall process model.
Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for both the trained models and the calibration process. The learning step is the slowest part of the process (a maximum of 5 minutes for this data), but it can be done offline just once. The calibration step is much faster, obtaining in under one minute a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. A fast processing time and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. The error behavior has been modeled to help improve the overall process understanding. This is relevant for the quick optimal set-up of the many industrial processes that use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020, the European Union funding programme for Research and Innovation, under Grant Agreement number 680820.
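The Gaussian calibration loop described above can be sketched in a few lines. The model below is a toy linear stand-in for the trained winding model, and the per-input means and standard deviations are hypothetical:

```python
import random

def calibrate(model, target, input_stats, n_samples=10000, seed=42):
    """Randomly search for inputs whose predicted output is closest to target.

    model: callable mapping an input vector to a predicted output.
    input_stats: list of (mean, std) per input, used to draw Gaussian samples.
    Returns the best input vector found and its absolute output error.
    """
    rng = random.Random(seed)
    best_inputs, best_err = None, float("inf")
    for _ in range(n_samples):
        # Draw one candidate input vector from the per-input Gaussians
        candidate = [rng.gauss(mu, sd) for mu, sd in input_stats]
        err = abs(model(candidate) - target)
        if err < best_err:
            best_inputs, best_err = candidate, err
    return best_inputs, best_err
```

With, say, `model = lambda x: 2.0 * x[0] + x[1]` and a target of 4.0, a few thousand samples suffice to find inputs whose predicted output is within a small tolerance of the target; heuristics (e.g., shrinking the sampling spread around the current best) would speed this up further, as the abstract suggests.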

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 238
13582 Developing an ANN Model to Predict Anthropometric Dimensions Based on Real Anthropometric Database

Authors: Waleed A. Basuliman, Khalid S. AlSaleh, Mohamed Z. Ramadan

Abstract:

Applying anthropometric dimensions is considered one of the important factors when designing any human-machine system. In this study, the estimation of anthropometric dimensions was improved by developing an artificial neural network that aims to predict the anthropometric measurements of males in Saudi Arabia. A total of 1427 Saudi males from age 6 to 60 participated in measuring twenty anthropometric dimensions. These anthropometric measurements are important for designing the majority of work and life applications in Saudi Arabia. The data were collected over 8 months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining fifteen dimensions were set to be the measured variables (outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was significantly able to predict the body dimensions of the population of Saudi Arabia. The network mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. The accuracy of the developed neural network was evaluated by comparing the predicted outcomes with those of a multiple regression model. The ANN model performed better and yielded excellent correlation coefficients between the predicted and actual dimensions.
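A toy version of this few-inputs-to-many-outputs network can be sketched with a single tanh hidden layer trained by plain batch gradient descent. The data below are synthetic random stand-ins, not the Saudi anthropometric database, and the 25-unit hidden layer only loosely mirrors the reported 6-25-15 structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 5 predictor dimensions -> 15 outcome dimensions.
# The shapes mirror the 5-input/15-output split in the abstract; the values
# are random, not real anthropometric measurements.
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=(5, 15))
Y = X @ true_w + 0.1 * rng.normal(size=(200, 15))

# One hidden layer of 25 tanh units with a linear output layer.
W1 = 0.1 * rng.normal(size=(5, 25))
W2 = 0.1 * rng.normal(size=(25, 15))
lr = 0.01

def forward(X):
    return np.tanh(X @ W1) @ W2

rmse0 = np.sqrt(np.mean((forward(X) - Y) ** 2))  # error before training
for _ in range(2000):
    H = np.tanh(X @ W1)                 # hidden activations
    err = H @ W2 - Y                    # residual on the training set
    grad_W2 = H.T @ err / len(X)        # backprop through the output layer
    grad_H = err @ W2.T * (1 - H ** 2)  # backprop through tanh
    grad_W1 = X.T @ grad_H / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
rmse = np.sqrt(np.mean((forward(X) - Y) ** 2))   # error after training
```

In practice a validation split and an early-stopping criterion would be added; here the point is only the predictor/outcome layout and the RMSE metric reported in the abstract.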

Keywords: artificial neural network, anthropometric measurements, backpropagation, real anthropometric database

Procedia PDF Downloads 568
13581 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology

Authors: Ugwu O. C., Mamah R. O., Awudu W. S.

Abstract:

This work aims at enhancing signal reception in a mobile radio network and minimizing outage probability using adaptive beamforming antenna arrays. In this research, an empirical real-time drive measurement was carried out in a cellular network of Globalcom Nigeria Limited located at Ikeja, the capital of Lagos State, Nigeria, with reference base station number KJA 004. The empirical measurement included the Received Signal Strength and Bit Error Rate, which were recorded for exact prediction of the signal strength of the network at the time of carrying out this research. The Received Signal Strength and Bit Error Rate were measured with a spectrum monitoring van, with the help of a ray tracer, at intervals of 100 meters up to 700 meters from the transmitting base station. The distance and angular location measurements from the reference network were made with the help of a Global Positioning System (GPS) receiver. The other equipment used included transmitting equipment measurement software (TEMS software), laptops, and log files, which showed received signal strength versus distance from the base station. Results of about 11% were obtained from the real-time experiment, showing that mobile radio networks are prone to signal failure; this can be minimized using an adaptive beamforming antenna array, in terms of a significant reduction in Bit Error Rate, which implies improved performance of the mobile radio network. In addition, this work did not only include experiments through empirical measurement; enhanced mathematical models were also developed and implemented as a reference for accurate prediction. The proposed signal models were based on the analysis of continuous time and discrete space, among other assumptions. These enhanced models were validated using MATLAB (version 7.6.3.35) and compared with a conventional antenna for accuracy.
These outage models were used to manage the blocked-call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented on the wireless mobile radio network.
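The interference-rejection idea behind adaptive beamforming can be illustrated with a complex LMS loop on a simulated uniform linear array. The array size, arrival angles, noise level, and step size below are illustrative choices, not parameters of the Globalcom measurement campaign:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_snap = 8, 2000
d = 0.5  # element spacing in wavelengths

def steering(theta_deg):
    """Array response of a uniform linear array toward angle theta."""
    k = np.arange(n_ant)
    return np.exp(2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

a_sig = steering(0.0)    # desired user at broadside
a_int = steering(40.0)   # co-channel interferer
s = np.sign(rng.standard_normal(n_snap))  # known BPSK training symbols
i = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((n_ant, n_snap))
               + 1j * rng.standard_normal((n_ant, n_snap)))
X = np.outer(a_sig, s) + np.outer(a_int, i) + noise

# Complex LMS: adapt weights so y = w^H x tracks the training symbols,
# which steers a beam toward the user and a null toward the interferer.
w = np.zeros(n_ant, dtype=complex)
mu = 0.005
for t in range(n_snap):
    x = X[:, t]
    y = np.vdot(w, x)          # beamformer output
    e = s[t] - y               # error against the training symbol
    w += mu * np.conj(e) * x   # LMS weight update

gain_sig = abs(np.vdot(w, a_sig))   # array response toward the desired user
gain_int = abs(np.vdot(w, a_int))   # array response toward the interferer
```

After adaptation the response toward the interferer is far below the response toward the desired user, which is the mechanism behind the Bit Error Rate reduction the abstract reports.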

Keywords: beamforming algorithm, adaptive beamforming, simulink, reception

Procedia PDF Downloads 29
13580 A Study on the Influence of Aswan High Dam Reservoir Loading on Earthquake Activity

Authors: Sayed Abdallah Mohamed Dahy

Abstract:

The Aswan High Dam Reservoir extends for 500 km along the Nile River; it is a vast reservoir in southern Egypt and northern Sudan, created by the construction of the Aswan High Dam between 1958 and 1970. About 95% of Egypt's main water resources come from it. The purpose of this study is to discuss and understand the effect of the fluctuation of the water level in the reservoir on natural and human-induced environmental hazards, such as earthquakes, in the Aswan area, Egypt. In summary, the correlations between the temporal variations of earthquake activity and water level changes in the Aswan reservoir from 1982 to 2014 are investigated and analyzed. This analysis confirms a weak relation between the fluctuation of the water level and earthquake activity in the area around the Aswan reservoir. The results suggest that seismicity in the area becomes active during the period when the water level is decreasing from its maximum to its minimum. The water level in this reservoir behaves in a characteristic manner: the unloading season extends to July or August, and the loading season raises the level to its maximum in October or November every year. Finally, the daily rate of change in the water level did not show any direct relation to the size of the earthquakes; hence, it cannot be used as a single tool for prediction.

Keywords: Aswan high dam reservoir, earthquake activity, environmental, Egypt

Procedia PDF Downloads 372
13579 The Study of Personal Participation in Educational Quality Assurance: Case Study of Programs in Graduate School, Suan Sunandha Rajabhat University

Authors: Nopadol Burananat, Kedsara Tripaichayonsak

Abstract:

This research aims to study the level of expectations and participation of personnel in implementing educational quality assurance for programs in the Graduate School, Suan Sunandha Rajabhat University. The sample used in this study comprises 60 participants. The tool used for data collection is a questionnaire constructed by the researchers. The analysis is done by frequency, percentage, mean, and standard deviation. It was found that the level of expectations of personnel in the Graduate School, Suan Sunandha Rajabhat University, in implementing educational quality assurance is high. The category which received the highest score is Action, followed by Check, Do, and Plan, respectively. For the level of participation of personnel at the program level of the Graduate School, Suan Sunandha Rajabhat University, in implementing educational quality assurance, the overall score is also high. The category which received the highest score is Action, followed by Do, Check, and Plan, respectively.

Keywords: participation, implementation of educational quality assurance, educational quality assurance, expectations and participation

Procedia PDF Downloads 379
13578 The Effects of Signal Level of the Microwave Generator on the Brillouin Gain Spectrum in BOTDA and BOTDR

Authors: Murat Yucel, Murat Yucel, Nail Ferhat Ozturk, Halim Haldun Goktas, Cemal Gemci, Fatih Vehbi Celebi

Abstract:

In this study, the Brillouin gain spectrum (BGS) is experimentally analyzed in Brillouin optical time domain reflectometry (BOTDR) and a Brillouin optical time domain analyzer (BOTDA). For this purpose, the signal level of the microwave generator is varied and its effects on the BGS are investigated. In both setups, 20 km of conventional single-mode fiber is used, and the laser wavelengths are selected around 1550 nm. To achieve the best results, a microwave generator signal level between 5 dBm and 15 dBm can be used for both the BOTDA and BOTDR setups.
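The BGS extracted in such measurements is, to a good approximation, a Lorentzian centered on the Brillouin frequency shift. The sketch below fits the center frequency of a simulated noisy spectrum by a simple grid-search least squares; the shift (~10.85 GHz) and linewidth (~30 MHz) are assumed typical values for standard single-mode fiber near 1550 nm, not measurements from this study:

```python
import numpy as np

def lorentzian(nu, nu_b, g0, dnu):
    """Lorentzian Brillouin gain profile: peak g0 at nu_b, FWHM dnu."""
    return g0 * (dnu / 2) ** 2 / ((nu - nu_b) ** 2 + (dnu / 2) ** 2)

rng = np.random.default_rng(2)
nu = np.linspace(10.60, 10.95, 351)        # GHz scan around the Brillouin shift
true_nu_b, g0, dnu = 10.85, 1.0, 0.030     # assumed SMF values near 1550 nm
meas = lorentzian(nu, true_nu_b, g0, dnu) + 0.02 * rng.standard_normal(nu.size)

# Grid-search least-squares fit of the center frequency (peak and width fixed)
candidates = np.linspace(10.80, 10.90, 1001)
sse = [np.sum((meas - lorentzian(nu, c, g0, dnu)) ** 2) for c in candidates]
nu_b_hat = candidates[int(np.argmin(sse))]
```

A full analysis would also fit the peak gain and linewidth, which is how the effect of the microwave signal level on the BGS would show up in practice.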

Keywords: microwave signal level, Brillouin gain spectrum, BOTDA, BOTDR

Procedia PDF Downloads 681
13577 River's Bed Level Changing Pattern Due to Sedimentation, Case Study: Gash River, Kassala, Sudan

Authors: Faisal Ali, Hasssan Saad Mohammed Hilmi, Mustafa Mohamed, Shamseddin Musa

Abstract:

The Gash River is an ephemeral river that usually flows from July to September. It has a braided pattern with a high sediment content: 15,200 ppm in suspension and 360 kg/s as bed load. The Gash river bed has an average slope of 1.3 m/km. The objectives of this study were: assessing the Gash River bed level patterns; quantifying the annual variations in the Gash bed level; and recommending a suitable method to reduce sediment accumulation on the Gash River bed. The study temporally covered the period 1905-2013, using datasets that included the Gash river flows and cross sections. The results showed an increasing trend in the river bed level of about 5 cm per year. This has changed the flood routing behavior, and consequently the flood hazard in Kassala city has increased tremendously.
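Such a bed-level trend is typically quantified by a least-squares fit of surveyed mean bed levels against time. The sketch below uses hypothetical survey values that merely illustrate a rise of roughly 5 cm per year, not the actual Gash cross-section data:

```python
import numpy as np

# Hypothetical mean bed levels (m) from annual cross-section surveys; the
# real study spans 1905-2013, these values only illustrate the trend fit.
years = np.arange(1990, 2014)
rng = np.random.default_rng(3)
bed_level = 500.0 + 0.05 * (years - years[0]) + 0.02 * rng.standard_normal(years.size)

# Least-squares linear trend: slope is the bed aggradation rate in m/year
slope, intercept = np.polyfit(years, bed_level, 1)
rise_cm_per_year = slope * 100.0
```

Applied to the full 1905-2013 record, the same fit gives the annual aggradation rate that feeds directly into the flood-routing assessment.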

Keywords: bed level, cross section, gash river, sedimentation

Procedia PDF Downloads 533
13576 Measuring the Height of a Person in Closed Circuit Television Video Footage Using 3D Human Body Model

Authors: Dojoon Jung, Kiwoong Moon, Joong Lee

Abstract:

The height of a criminal is one of the important clues that can narrow the scope of a suspect search or exclude a suspect from the search target. Although measuring the height of criminals from video alone is limited for various reasons, if the 3D data of the scene and the Closed Circuit Television (CCTV) footage are matched, the height of the criminal can be measured. However, it is still difficult to measure height from CCTV footage with non-contact measurement methods because of variables such as the position, posture, and head shape of criminals. In this paper, we propose a method of matching the CCTV footage with the 3D data of the crime scene and measuring the height of the person using a 3D human body model in the matched data. In the proposed method, the height is measured by applying the 3D human model to various scenes of the person in the CCTV footage, and the measurement value of the target person is corrected by the measurement error obtained from replayed CCTV footage of reference persons. We tested the method on walking CCTV footage of 20 people captured indoors and outdoors and corrected the measurement values with 5 reference persons. Experimental results show that the average measurement error (true value minus measured value) is 0.45 cm, and that this method is effective for measuring a person's height in CCTV footage.
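The reference-person correction can be sketched as a simple bias shift: measure reference persons of known height in the same replayed footage, and add their mean measurement error to the target's estimate. The numbers below are illustrative, not values from the experiment:

```python
def corrected_height(measured_target, reference_true, reference_measured):
    """Correct a 3D-model height estimate using reference persons.

    reference_true / reference_measured: known and measured heights (cm)
    of the reference persons filmed in the same replayed CCTV footage.
    The target estimate is shifted by the mean error observed on them.
    """
    errors = [t - m for t, m in zip(reference_true, reference_measured)]
    bias = sum(errors) / len(errors)
    return measured_target + bias
```

For example, if three reference persons of 175.0, 168.0, and 181.0 cm are measured as 174.2, 167.5, and 180.1 cm, the mean error is about +0.73 cm, and a target measured at 172.0 cm is corrected upward by that amount.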

Keywords: human height, CCTV footage, 2D/3D matching, 3D human body model

Procedia PDF Downloads 245
13575 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition, and medical diagnostics. The objective of this article is to analyze an adapted stacking method for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training procedure similar to stacking was used, but with three levels, where the final decision-maker (level 2) is trained by combining the outputs of a pair of meta-classifiers (level 1) from tree-based and Bayesian families. These are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments, and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels, and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
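The three-level layout can be sketched by nesting scikit-learn's `StackingClassifier`: same-family base pairs at level 0, one meta-classifier per family at level 1, and a final decision-maker at level 2. The dataset, families, and hyperparameters below are illustrative stand-ins, not the authors' exact configuration:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level 1: one meta-classifier per family, each stacking a same-family pair
# of level-0 base classifiers.
meta_bayes = StackingClassifier(
    estimators=[("nb1", GaussianNB()),
                ("nb2", GaussianNB(var_smoothing=1e-8))],
    final_estimator=DecisionTreeClassifier(max_depth=3, random_state=0))
meta_tree = StackingClassifier(
    estimators=[("dt1", DecisionTreeClassifier(max_depth=2, random_state=0)),
                ("dt2", DecisionTreeClassifier(max_depth=4, random_state=0))],
    final_estimator=DecisionTreeClassifier(max_depth=3, random_state=0))

# Level 2: the final decision-maker combines the level-1 meta-classifiers.
level2 = StackingClassifier(
    estimators=[("meta_bayes", meta_bayes), ("meta_tree", meta_tree)],
    final_estimator=LogisticRegression(max_iter=1000))
level2.fit(X_tr, y_tr)
acc = level2.score(X_te, y_te)
```

Mixing families only at the top level is what promotes the diversity the abstract emphasizes: each level-1 ensemble stays homogeneous, so its errors are correlated internally but not with those of the other family.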

Keywords: stacking, multi-layers, ensemble, multi-class

Procedia PDF Downloads 263
13574 A Weighted Sum Particle Swarm Approach (WPSO) Combined with a Novel Feasibility-Based Ranking Strategy for Constrained Multi-Objective Optimization of Compact Heat Exchangers

Authors: Milad Yousefi, Moslem Yousefi, Ricarpo Poley, Amer Nordin Darus

Abstract:

Design optimization of heat exchangers is a very complicated task that has traditionally been carried out via a trial-and-error procedure. To overcome the difficulties of conventional design approaches, especially when a large number of variables, constraints, and objectives are involved, a new method is presented in this study, based on a well-established evolutionary algorithm, particle swarm optimization (PSO), a weighted sum approach, and a novel constraint handling strategy. Since conventional constraint handling strategies are neither effective nor easy to implement in multi-objective algorithms, a novel feasibility-based ranking strategy is introduced that is both extremely user-friendly and effective. A case study from industry is investigated to illustrate the performance of the presented approach. The results show that the proposed algorithm can find the near Pareto-optimal front with higher accuracy than the conventional non-dominated sorting genetic algorithm II (NSGA-II). Moreover, the difficulty of a trial-and-error process for setting penalty parameters is resolved by this algorithm.
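The combination of a weighted-sum objective with feasibility-based ranking can be sketched as follows. The key point is the comparison rule: any feasible particle outranks any infeasible one, and infeasible particles compare by constraint violation alone, so no penalty parameter is needed. The toy objectives, constraint, and PSO coefficients below are illustrative, not the heat-exchanger case study:

```python
import random

def wpso(f1, f2, g, bounds, w1=0.5, n=30, iters=200, seed=0):
    """Weighted-sum PSO with feasibility-based ranking.

    f1, f2: objectives to minimize; g: constraint, violated when g(x) > 0;
    bounds: list of (lo, hi) per variable. Feasible candidates compare by
    weighted-sum fitness, infeasible ones by violation only.
    """
    rng = random.Random(seed)
    dim = len(bounds)

    def key(x):
        viol = max(0.0, g(x))
        # (0, fitness) for feasible, (1, violation) for infeasible, so any
        # feasible point beats any infeasible one under tuple comparison.
        return (1, viol) if viol > 0 else (0, w1 * f1(x) + (1 - w1) * f2(x))

    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=key)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            if key(pos[i]) < key(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=key)
    return gbest
```

On a toy problem minimizing f1 = x² and f2 = (x-2)² subject to x ≥ 1.5, the unconstrained weighted-sum optimum (x = 1) is infeasible, and the ranking drives the swarm to the constraint boundary at x = 1.5 without any penalty tuning.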

Keywords: heat exchanger, multi-objective optimization, particle swarm optimization, NSGA-II, constraint handling

Procedia PDF Downloads 551